2026-03-29 00:00:07.444805 | Job console starting
2026-03-29 00:00:07.496239 | Updating git repos
2026-03-29 00:00:07.889683 | Cloning repos into workspace
2026-03-29 00:00:08.305801 | Restoring repo states
2026-03-29 00:00:08.341000 | Merging changes
2026-03-29 00:00:08.341024 | Checking out repos
2026-03-29 00:00:09.036688 | Preparing playbooks
2026-03-29 00:00:10.512488 | Running Ansible setup
2026-03-29 00:00:19.408447 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-29 00:00:21.187493 |
2026-03-29 00:00:21.187652 | PLAY [Base pre]
2026-03-29 00:00:21.219682 |
2026-03-29 00:00:21.219824 | TASK [Setup log path fact]
2026-03-29 00:00:21.266777 | orchestrator | ok
2026-03-29 00:00:21.307510 |
2026-03-29 00:00:21.307684 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-29 00:00:21.366057 | orchestrator | ok
2026-03-29 00:00:21.385082 |
2026-03-29 00:00:21.385202 | TASK [emit-job-header : Print job information]
2026-03-29 00:00:21.451952 | # Job Information
2026-03-29 00:00:21.452132 | Ansible Version: 2.16.14
2026-03-29 00:00:21.452167 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-29 00:00:21.452202 | Pipeline: periodic-midnight
2026-03-29 00:00:21.452225 | Executor: 521e9411259a
2026-03-29 00:00:21.452246 | Triggered by: https://github.com/osism/testbed
2026-03-29 00:00:21.452268 | Event ID: 8728361d0a6a491ab345cc1284af2839
2026-03-29 00:00:21.466700 |
2026-03-29 00:00:21.466824 | LOOP [emit-job-header : Print node information]
2026-03-29 00:00:21.687573 | orchestrator | ok:
2026-03-29 00:00:21.687883 | orchestrator | # Node Information
2026-03-29 00:00:21.687934 | orchestrator | Inventory Hostname: orchestrator
2026-03-29 00:00:21.687961 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-29 00:00:21.687984 | orchestrator | Username: zuul-testbed02
2026-03-29 00:00:21.688006 | orchestrator | Distro: Debian 12.13
2026-03-29 00:00:21.688029 | orchestrator | Provider: static-testbed
2026-03-29 00:00:21.688051 | orchestrator | Region:
2026-03-29 00:00:21.688071 | orchestrator | Label: testbed-orchestrator
2026-03-29 00:00:21.688090 | orchestrator | Product Name: OpenStack Nova
2026-03-29 00:00:21.688109 | orchestrator | Interface IP: 81.163.193.140
2026-03-29 00:00:21.709098 |
2026-03-29 00:00:21.709204 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-29 00:00:22.682375 | orchestrator -> localhost | changed
2026-03-29 00:00:22.690398 |
2026-03-29 00:00:22.690516 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-29 00:00:26.181209 | orchestrator -> localhost | changed
2026-03-29 00:00:26.242288 |
2026-03-29 00:00:26.242978 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-29 00:00:27.628215 | orchestrator -> localhost | ok
2026-03-29 00:00:27.645963 |
2026-03-29 00:00:27.646124 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-29 00:00:27.736427 | orchestrator | ok
2026-03-29 00:00:27.784562 | orchestrator | included: /var/lib/zuul/builds/0c9b8dad94e24d61892e6bb3a93b466e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-29 00:00:27.815567 |
2026-03-29 00:00:27.832440 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-29 00:00:30.750168 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-29 00:00:30.750347 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/0c9b8dad94e24d61892e6bb3a93b466e/work/0c9b8dad94e24d61892e6bb3a93b466e_id_rsa
2026-03-29 00:00:30.750379 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/0c9b8dad94e24d61892e6bb3a93b466e/work/0c9b8dad94e24d61892e6bb3a93b466e_id_rsa.pub
2026-03-29 00:00:30.750403 | orchestrator -> localhost | The key fingerprint is:
2026-03-29 00:00:30.750426 | orchestrator -> localhost | SHA256:jn+WT189yVgEbyOTKsar6XI8j6pcm12NWrOGc6FG++o zuul-build-sshkey
2026-03-29 00:00:30.750446 | orchestrator -> localhost | The key's randomart image is:
2026-03-29 00:00:30.750474 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-29 00:00:30.750492 | orchestrator -> localhost | | . |
2026-03-29 00:00:30.750510 | orchestrator -> localhost | | + |
2026-03-29 00:00:30.750526 | orchestrator -> localhost | | + = |
2026-03-29 00:00:30.750542 | orchestrator -> localhost | | . . = .|
2026-03-29 00:00:30.750558 | orchestrator -> localhost | | S+ . . |
2026-03-29 00:00:30.750578 | orchestrator -> localhost | | +..= + o|
2026-03-29 00:00:30.750595 | orchestrator -> localhost | | .+ =*.o.. +o|
2026-03-29 00:00:30.750611 | orchestrator -> localhost | | . ..+@Bo*. . ..|
2026-03-29 00:00:30.750627 | orchestrator -> localhost | | o.+BEX* .. . |
2026-03-29 00:00:30.750642 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-29 00:00:30.750684 | orchestrator -> localhost | ok: Runtime: 0:00:01.520993
2026-03-29 00:00:30.756818 |
2026-03-29 00:00:30.756919 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-29 00:00:30.814515 | orchestrator | ok
2026-03-29 00:00:30.840268 | orchestrator | included: /var/lib/zuul/builds/0c9b8dad94e24d61892e6bb3a93b466e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-29 00:00:30.872580 |
2026-03-29 00:00:30.872670 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-29 00:00:30.919169 | orchestrator | skipping: Conditional result was False
2026-03-29 00:00:30.925340 |
2026-03-29 00:00:30.925420 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-29 00:00:31.968504 | orchestrator | changed
2026-03-29 00:00:31.974674 |
2026-03-29 00:00:31.974774 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-29 00:00:32.329517 | orchestrator | ok
2026-03-29 00:00:32.337201 |
2026-03-29 00:00:32.337291 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-29 00:00:32.859076 | orchestrator | ok
2026-03-29 00:00:32.863876 |
2026-03-29 00:00:32.863970 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-29 00:00:33.315386 | orchestrator | ok
2026-03-29 00:00:33.320441 |
2026-03-29 00:00:33.320524 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-29 00:00:33.352617 | orchestrator | skipping: Conditional result was False
2026-03-29 00:00:33.376171 |
2026-03-29 00:00:33.376268 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-29 00:00:34.878142 | orchestrator -> localhost | changed
2026-03-29 00:00:34.894155 |
2026-03-29 00:00:34.894253 | TASK [add-build-sshkey : Add back temp key]
2026-03-29 00:00:36.041503 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/0c9b8dad94e24d61892e6bb3a93b466e/work/0c9b8dad94e24d61892e6bb3a93b466e_id_rsa (zuul-build-sshkey)
2026-03-29 00:00:36.041695 | orchestrator -> localhost | ok: Runtime: 0:00:00.031769
2026-03-29 00:00:36.047528 |
2026-03-29 00:00:36.047600 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-29 00:00:36.843837 | orchestrator | ok
2026-03-29 00:00:36.855812 |
2026-03-29 00:00:36.855924 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-29 00:00:36.895712 | orchestrator | skipping: Conditional result was False
2026-03-29 00:00:36.976217 |
2026-03-29 00:00:36.976314 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-29 00:00:37.563417 | orchestrator | ok
2026-03-29 00:00:37.586909 |
2026-03-29 00:00:37.587015 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-29 00:00:37.645114 | orchestrator | ok
2026-03-29 00:00:37.655102 |
2026-03-29 00:00:37.655193 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-29 00:00:38.263865 | orchestrator -> localhost | ok
2026-03-29 00:00:38.269910 |
2026-03-29 00:00:38.269990 | TASK [validate-host : Collect information about the host]
2026-03-29 00:00:39.664140 | orchestrator | ok
2026-03-29 00:00:39.732161 |
2026-03-29 00:00:39.732286 | TASK [validate-host : Sanitize hostname]
2026-03-29 00:00:39.822469 | orchestrator | ok
2026-03-29 00:00:39.832992 |
2026-03-29 00:00:39.833103 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-29 00:00:41.388410 | orchestrator -> localhost | changed
2026-03-29 00:00:41.399445 |
2026-03-29 00:00:41.399538 | TASK [validate-host : Collect information about zuul worker]
2026-03-29 00:00:41.929016 | orchestrator | ok
2026-03-29 00:00:41.933425 |
2026-03-29 00:00:41.933508 | TASK [validate-host : Write out all zuul information for each host]
2026-03-29 00:00:43.191927 | orchestrator -> localhost | changed
2026-03-29 00:00:43.203146 |
2026-03-29 00:00:43.203244 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-29 00:00:43.526462 | orchestrator | ok
2026-03-29 00:00:43.544320 |
2026-03-29 00:00:43.544420 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-29 00:02:15.442221 | orchestrator | changed:
2026-03-29 00:02:15.443806 | orchestrator | .d..t...... src/
2026-03-29 00:02:15.443867 | orchestrator | .d..t...... src/github.com/
2026-03-29 00:02:15.443894 | orchestrator | .d..t...... src/github.com/osism/
2026-03-29 00:02:15.443916 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-29 00:02:15.443937 | orchestrator | RedHat.yml
2026-03-29 00:02:15.459121 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-29 00:02:15.459139 | orchestrator | RedHat.yml
2026-03-29 00:02:15.459191 | orchestrator | = 1.53.0"...
2026-03-29 00:02:27.519575 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-29 00:02:27.540170 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-29 00:02:27.771140 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-29 00:02:28.551107 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-29 00:02:28.810999 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-29 00:02:29.600864 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-29 00:02:29.669544 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-29 00:02:30.103905 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-29 00:02:30.103976 | orchestrator |
2026-03-29 00:02:30.103987 | orchestrator | Providers are signed by their developers.
2026-03-29 00:02:30.103995 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-29 00:02:30.104013 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-29 00:02:30.104024 | orchestrator |
2026-03-29 00:02:30.104032 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-29 00:02:30.104040 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-29 00:02:30.104059 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-29 00:02:30.104064 | orchestrator | you run "tofu init" in the future.
2026-03-29 00:02:30.104316 | orchestrator |
2026-03-29 00:02:30.104332 | orchestrator | OpenTofu has been successfully initialized!
2026-03-29 00:02:30.104339 | orchestrator |
2026-03-29 00:02:30.104346 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-29 00:02:30.104353 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-29 00:02:30.104360 | orchestrator | should now work.
2026-03-29 00:02:30.104366 | orchestrator |
2026-03-29 00:02:30.104372 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-29 00:02:30.104379 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-29 00:02:30.104386 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-29 00:02:30.308958 | orchestrator | Created and switched to workspace "ci"!
2026-03-29 00:02:30.309027 | orchestrator |
2026-03-29 00:02:30.309036 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-29 00:02:30.309044 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-29 00:02:30.309052 | orchestrator | for this configuration.
2026-03-29 00:02:30.427681 | orchestrator | ci.auto.tfvars
2026-03-29 00:02:30.961889 | orchestrator | default_custom.tf
2026-03-29 00:02:33.744803 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-29 00:02:34.310803 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-29 00:02:34.666090 | orchestrator |
2026-03-29 00:02:34.666163 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-29 00:02:34.666171 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-29 00:02:34.680904 | orchestrator |   + create
2026-03-29 00:02:34.680952 | orchestrator |  <= read (data resources)
2026-03-29 00:02:34.680963 | orchestrator |
2026-03-29 00:02:34.680968 | orchestrator | OpenTofu will perform the following actions:
2026-03-29 00:02:34.681006 | orchestrator |
2026-03-29 00:02:34.681012 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-03-29 00:02:34.681017 | orchestrator |   # (config refers to values not yet known)
2026-03-29 00:02:34.681022 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-03-29 00:02:34.681027 | orchestrator |       + checksum = (known after apply)
2026-03-29 00:02:34.681031 | orchestrator |       + created_at = (known after apply)
2026-03-29 00:02:34.681036 | orchestrator |       + file = (known after apply)
2026-03-29 00:02:34.681040 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.681059 | orchestrator |       + metadata = (known after apply)
2026-03-29 00:02:34.681064 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-29 00:02:34.681068 | orchestrator |       + min_ram_mb = (known after apply)
2026-03-29 00:02:34.681072 | orchestrator |       + most_recent = true
2026-03-29 00:02:34.681076 | orchestrator |       + name = (known after apply)
2026-03-29 00:02:34.681080 | orchestrator |       + protected = (known after apply)
2026-03-29 00:02:34.681084 | orchestrator |       + region = (known after apply)
2026-03-29 00:02:34.681091 | orchestrator |       + schema = (known after apply)
2026-03-29 00:02:34.681095 | orchestrator |       + size_bytes = (known after apply)
2026-03-29 00:02:34.681099 | orchestrator |       + tags = (known after apply)
2026-03-29 00:02:34.681103 | orchestrator |       + updated_at = (known after apply)
2026-03-29 00:02:34.681107 | orchestrator |     }
2026-03-29 00:02:34.681113 | orchestrator |
2026-03-29 00:02:34.681117 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-03-29 00:02:34.681121 | orchestrator |   # (config refers to values not yet known)
2026-03-29 00:02:34.681125 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-03-29 00:02:34.681129 | orchestrator |       + checksum = (known after apply)
2026-03-29 00:02:34.681132 | orchestrator |       + created_at = (known after apply)
2026-03-29 00:02:34.681136 | orchestrator |       + file = (known after apply)
2026-03-29 00:02:34.681140 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.681144 | orchestrator |       + metadata = (known after apply)
2026-03-29 00:02:34.681148 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-29 00:02:34.681152 | orchestrator |       + min_ram_mb = (known after apply)
2026-03-29 00:02:34.681155 | orchestrator |       + most_recent = true
2026-03-29 00:02:34.681159 | orchestrator |       + name = (known after apply)
2026-03-29 00:02:34.681163 | orchestrator |       + protected = (known after apply)
2026-03-29 00:02:34.681167 | orchestrator |       + region = (known after apply)
2026-03-29 00:02:34.681170 | orchestrator |       + schema = (known after apply)
2026-03-29 00:02:34.681174 | orchestrator |       + size_bytes = (known after apply)
2026-03-29 00:02:34.681178 | orchestrator |       + tags = (known after apply)
2026-03-29 00:02:34.681182 | orchestrator |       + updated_at = (known after apply)
2026-03-29 00:02:34.681186 | orchestrator |     }
2026-03-29 00:02:34.681191 | orchestrator |
2026-03-29 00:02:34.681195 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-03-29 00:02:34.681199 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-03-29 00:02:34.681203 | orchestrator |       + content = (known after apply)
2026-03-29 00:02:34.681207 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-29 00:02:34.681211 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-29 00:02:34.681214 | orchestrator |       + content_md5 = (known after apply)
2026-03-29 00:02:34.681218 | orchestrator |       + content_sha1 = (known after apply)
2026-03-29 00:02:34.681222 | orchestrator |       + content_sha256 = (known after apply)
2026-03-29 00:02:34.681226 | orchestrator |       + content_sha512 = (known after apply)
2026-03-29 00:02:34.681229 | orchestrator |       + directory_permission = "0777"
2026-03-29 00:02:34.681233 | orchestrator |       + file_permission = "0644"
2026-03-29 00:02:34.681237 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-03-29 00:02:34.681241 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.681244 | orchestrator |     }
2026-03-29 00:02:34.681250 | orchestrator |
2026-03-29 00:02:34.681253 | orchestrator |   # local_file.id_rsa_pub will be created
2026-03-29 00:02:34.681257 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-03-29 00:02:34.681261 | orchestrator |       + content = (known after apply)
2026-03-29 00:02:34.681265 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-29 00:02:34.681268 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-29 00:02:34.681272 | orchestrator |       + content_md5 = (known after apply)
2026-03-29 00:02:34.681276 | orchestrator |       + content_sha1 = (known after apply)
2026-03-29 00:02:34.681280 | orchestrator |       + content_sha256 = (known after apply)
2026-03-29 00:02:34.681283 | orchestrator |       + content_sha512 = (known after apply)
2026-03-29 00:02:34.681287 | orchestrator |       + directory_permission = "0777"
2026-03-29 00:02:34.681291 | orchestrator |       + file_permission = "0644"
2026-03-29 00:02:34.681299 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-03-29 00:02:34.681302 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.681306 | orchestrator |     }
2026-03-29 00:02:34.681311 | orchestrator |
2026-03-29 00:02:34.681319 | orchestrator |   # local_file.inventory will be created
2026-03-29 00:02:34.681323 | orchestrator |   + resource "local_file" "inventory" {
2026-03-29 00:02:34.681326 | orchestrator |       + content = (known after apply)
2026-03-29 00:02:34.681330 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-29 00:02:34.681334 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-29 00:02:34.681338 | orchestrator |       + content_md5 = (known after apply)
2026-03-29 00:02:34.681341 | orchestrator |       + content_sha1 = (known after apply)
2026-03-29 00:02:34.681345 | orchestrator |       + content_sha256 = (known after apply)
2026-03-29 00:02:34.681349 | orchestrator |       + content_sha512 = (known after apply)
2026-03-29 00:02:34.681353 | orchestrator |       + directory_permission = "0777"
2026-03-29 00:02:34.681357 | orchestrator |       + file_permission = "0644"
2026-03-29 00:02:34.681361 | orchestrator |       + filename = "inventory.ci"
2026-03-29 00:02:34.681364 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.681368 | orchestrator |     }
2026-03-29 00:02:34.681373 | orchestrator |
2026-03-29 00:02:34.681377 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-03-29 00:02:34.681381 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-03-29 00:02:34.681385 | orchestrator |       + content = (sensitive value)
2026-03-29 00:02:34.681389 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-29 00:02:34.681392 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-29 00:02:34.681396 | orchestrator |       + content_md5 = (known after apply)
2026-03-29 00:02:34.681400 | orchestrator |       + content_sha1 = (known after apply)
2026-03-29 00:02:34.681404 | orchestrator |       + content_sha256 = (known after apply)
2026-03-29 00:02:34.681407 | orchestrator |       + content_sha512 = (known after apply)
2026-03-29 00:02:34.681411 | orchestrator |       + directory_permission = "0700"
2026-03-29 00:02:34.681415 | orchestrator |       + file_permission = "0600"
2026-03-29 00:02:34.681419 | orchestrator |       + filename = ".id_rsa.ci"
2026-03-29 00:02:34.681423 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.681426 | orchestrator |     }
2026-03-29 00:02:34.681430 | orchestrator |
2026-03-29 00:02:34.681434 | orchestrator |   # null_resource.node_semaphore will be created
2026-03-29 00:02:34.681438 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-03-29 00:02:34.681441 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.681457 | orchestrator |     }
2026-03-29 00:02:34.681466 | orchestrator |
2026-03-29 00:02:34.681470 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-29 00:02:34.681474 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-29 00:02:34.681478 | orchestrator |       + attachment = (known after apply)
2026-03-29 00:02:34.681482 | orchestrator |       + availability_zone = "nova"
2026-03-29 00:02:34.681486 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.681489 | orchestrator |       + image_id = (known after apply)
2026-03-29 00:02:34.681493 | orchestrator |       + metadata = (known after apply)
2026-03-29 00:02:34.681497 | orchestrator |       + name = "testbed-volume-manager-base"
2026-03-29 00:02:34.681501 | orchestrator |       + region = (known after apply)
2026-03-29 00:02:34.681505 | orchestrator |       + size = 80
2026-03-29 00:02:34.681509 | orchestrator |       + volume_retype_policy = "never"
2026-03-29 00:02:34.681512 | orchestrator |       + volume_type = "ssd"
2026-03-29 00:02:34.681516 | orchestrator |     }
2026-03-29 00:02:34.681520 | orchestrator |
2026-03-29 00:02:34.681524 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-29 00:02:34.681528 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-29 00:02:34.681531 | orchestrator |       + attachment = (known after apply)
2026-03-29 00:02:34.681535 | orchestrator |       + availability_zone = "nova"
2026-03-29 00:02:34.681539 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.681546 | orchestrator |       + image_id = (known after apply)
2026-03-29 00:02:34.681550 | orchestrator |       + metadata = (known after apply)
2026-03-29 00:02:34.681554 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-03-29 00:02:34.681557 | orchestrator |       + region = (known after apply)
2026-03-29 00:02:34.681561 | orchestrator |       + size = 80
2026-03-29 00:02:34.681565 | orchestrator |       + volume_retype_policy = "never"
2026-03-29 00:02:34.681569 | orchestrator |       + volume_type = "ssd"
2026-03-29 00:02:34.681573 | orchestrator |     }
2026-03-29 00:02:34.681578 | orchestrator |
2026-03-29 00:02:34.681582 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-29 00:02:34.681586 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-29 00:02:34.681590 | orchestrator |       + attachment = (known after apply)
2026-03-29 00:02:34.681593 | orchestrator |       + availability_zone = "nova"
2026-03-29 00:02:34.681597 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.681601 | orchestrator |       + image_id = (known after apply)
2026-03-29 00:02:34.681605 | orchestrator |       + metadata = (known after apply)
2026-03-29 00:02:34.681609 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-03-29 00:02:34.681612 | orchestrator |       + region = (known after apply)
2026-03-29 00:02:34.681616 | orchestrator |       + size = 80
2026-03-29 00:02:34.681620 | orchestrator |       + volume_retype_policy = "never"
2026-03-29 00:02:34.681624 | orchestrator |       + volume_type = "ssd"
2026-03-29 00:02:34.681628 | orchestrator |     }
2026-03-29 00:02:34.681631 | orchestrator |
2026-03-29 00:02:34.681635 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-29 00:02:34.681639 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-29 00:02:34.681643 | orchestrator |       + attachment = (known after apply)
2026-03-29 00:02:34.681647 | orchestrator |       + availability_zone = "nova"
2026-03-29 00:02:34.681650 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.681654 | orchestrator |       + image_id = (known after apply)
2026-03-29 00:02:34.681658 | orchestrator |       + metadata = (known after apply)
2026-03-29 00:02:34.681662 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-03-29 00:02:34.681666 | orchestrator |       + region = (known after apply)
2026-03-29 00:02:34.681669 | orchestrator |       + size = 80
2026-03-29 00:02:34.681673 | orchestrator |       + volume_retype_policy = "never"
2026-03-29 00:02:34.681677 | orchestrator |       + volume_type = "ssd"
2026-03-29 00:02:34.681681 | orchestrator |     }
2026-03-29 00:02:34.681686 | orchestrator |
2026-03-29 00:02:34.681690 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-29 00:02:34.681694 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-29 00:02:34.681697 | orchestrator |       + attachment = (known after apply)
2026-03-29 00:02:34.681701 | orchestrator |       + availability_zone = "nova"
2026-03-29 00:02:34.681705 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.681709 | orchestrator |       + image_id = (known after apply)
2026-03-29 00:02:34.681712 | orchestrator |       + metadata = (known after apply)
2026-03-29 00:02:34.681721 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-03-29 00:02:34.681725 | orchestrator |       + region = (known after apply)
2026-03-29 00:02:34.681729 | orchestrator |       + size = 80
2026-03-29 00:02:34.681733 | orchestrator |       + volume_retype_policy = "never"
2026-03-29 00:02:34.681737 | orchestrator |       + volume_type = "ssd"
2026-03-29 00:02:34.681741 | orchestrator |     }
2026-03-29 00:02:34.681744 | orchestrator |
2026-03-29 00:02:34.681748 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-29 00:02:34.681752 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-29 00:02:34.681756 | orchestrator |       + attachment = (known after apply)
2026-03-29 00:02:34.681760 | orchestrator |       + availability_zone = "nova"
2026-03-29 00:02:34.681763 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.681770 | orchestrator |       + image_id = (known after apply)
2026-03-29 00:02:34.681774 | orchestrator |       + metadata = (known after apply)
2026-03-29 00:02:34.681778 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-03-29 00:02:34.681782 | orchestrator |       + region = (known after apply)
2026-03-29 00:02:34.681785 | orchestrator |       + size = 80
2026-03-29 00:02:34.681789 | orchestrator |       + volume_retype_policy = "never"
2026-03-29 00:02:34.681793 | orchestrator |       + volume_type = "ssd"
2026-03-29 00:02:34.681797 | orchestrator |     }
2026-03-29 00:02:34.681802 | orchestrator |
2026-03-29 00:02:34.681806 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-29 00:02:34.681810 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-29 00:02:34.681814 | orchestrator |       + attachment = (known after apply)
2026-03-29 00:02:34.681817 | orchestrator |       + availability_zone = "nova"
2026-03-29 00:02:34.681821 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.681825 | orchestrator |       + image_id = (known after apply)
2026-03-29 00:02:34.681829 | orchestrator |       + metadata = (known after apply)
2026-03-29 00:02:34.681833 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-03-29 00:02:34.681836 | orchestrator |       + region = (known after apply)
2026-03-29 00:02:34.681840 | orchestrator |       + size = 80
2026-03-29 00:02:34.681844 | orchestrator |       + volume_retype_policy = "never"
2026-03-29 00:02:34.681848 | orchestrator |       + volume_type = "ssd"
2026-03-29 00:02:34.681852 | orchestrator |     }
2026-03-29 00:02:34.681855 | orchestrator |
2026-03-29 00:02:34.681859 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-29 00:02:34.681863 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 00:02:34.681867 | orchestrator |       + attachment = (known after apply)
2026-03-29 00:02:34.681871 | orchestrator |       + availability_zone = "nova"
2026-03-29 00:02:34.681874 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.681878 | orchestrator |       + metadata = (known after apply)
2026-03-29 00:02:34.681882 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-03-29 00:02:34.681886 | orchestrator |       + region = (known after apply)
2026-03-29 00:02:34.681890 | orchestrator |       + size = 20
2026-03-29 00:02:34.681894 | orchestrator |       + volume_retype_policy = "never"
2026-03-29 00:02:34.681898 | orchestrator |       + volume_type = "ssd"
2026-03-29 00:02:34.681902 | orchestrator |     }
2026-03-29 00:02:34.681907 | orchestrator |
2026-03-29 00:02:34.681911 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-29 00:02:34.681914 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 00:02:34.681918 | orchestrator |       + attachment = (known after apply)
2026-03-29 00:02:34.681922 | orchestrator |       + availability_zone = "nova"
2026-03-29 00:02:34.681926 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.681930 | orchestrator |       + metadata = (known after apply)
2026-03-29 00:02:34.681933 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-03-29 00:02:34.681937 | orchestrator |       + region = (known after apply)
2026-03-29 00:02:34.681941 | orchestrator |       + size = 20
2026-03-29 00:02:34.681945 | orchestrator |       + volume_retype_policy = "never"
2026-03-29 00:02:34.681948 | orchestrator |       + volume_type = "ssd"
2026-03-29 00:02:34.681952 | orchestrator |     }
2026-03-29 00:02:34.681956 | orchestrator |
2026-03-29 00:02:34.681960 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-29 00:02:34.681964 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 00:02:34.681968 | orchestrator |       + attachment = (known after apply)
2026-03-29 00:02:34.681971 | orchestrator |       + availability_zone = "nova"
2026-03-29 00:02:34.681975 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.681979 | orchestrator |       + metadata = (known after apply)
2026-03-29 00:02:34.681983 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-03-29 00:02:34.681986 | orchestrator |       + region = (known after apply)
2026-03-29 00:02:34.681993 | orchestrator |       + size = 20
2026-03-29 00:02:34.681997 | orchestrator |       + volume_retype_policy = "never"
2026-03-29 00:02:34.682000 | orchestrator |       + volume_type = "ssd"
2026-03-29 00:02:34.682004 | orchestrator |     }
2026-03-29 00:02:34.682009 | orchestrator |
2026-03-29 00:02:34.682031 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-29 00:02:34.682035 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 00:02:34.682039 | orchestrator |       + attachment = (known after apply)
2026-03-29 00:02:34.682043 | orchestrator |       + availability_zone = "nova"
2026-03-29 00:02:34.682046 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.682050 | orchestrator |       + metadata = (known after apply)
2026-03-29 00:02:34.682054 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-03-29 00:02:34.682058 | orchestrator |       + region = (known after apply)
2026-03-29 00:02:34.682062 | orchestrator |       + size = 20
2026-03-29 00:02:34.682066 | orchestrator |       + volume_retype_policy = "never"
2026-03-29 00:02:34.682069 | orchestrator |       + volume_type = "ssd"
2026-03-29 00:02:34.682073 | orchestrator |     }
2026-03-29 00:02:34.682077 | orchestrator |
2026-03-29 00:02:34.682081 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-29 00:02:34.682084 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 00:02:34.682088 | orchestrator |       + attachment = (known after apply)
2026-03-29 00:02:34.682092 | orchestrator |       + availability_zone = "nova"
2026-03-29 00:02:34.682096 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.682100 | orchestrator |       + metadata = (known after apply)
2026-03-29 00:02:34.682103 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-03-29 00:02:34.682107 | orchestrator |       + region = (known after apply)
2026-03-29 00:02:34.682113 | orchestrator |       + size = 20
2026-03-29 00:02:34.682117 | orchestrator |       + volume_retype_policy = "never"
2026-03-29 00:02:34.682121 | orchestrator |       + volume_type = "ssd"
2026-03-29 00:02:34.682125 | orchestrator |     }
2026-03-29 00:02:34.682131 | orchestrator |
2026-03-29 00:02:34.682135 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-29 00:02:34.682138 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 00:02:34.682142 | orchestrator |       + attachment = (known after apply)
2026-03-29 00:02:34.682146 | orchestrator |       + availability_zone = "nova"
2026-03-29 00:02:34.682150 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.682153 | orchestrator |       + metadata = (known after apply)
2026-03-29 00:02:34.682157 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-03-29 00:02:34.682161 | orchestrator |       + region = (known after apply)
2026-03-29 00:02:34.682165 | orchestrator |       + size = 20
2026-03-29 00:02:34.682168 | orchestrator |       + volume_retype_policy = "never"
2026-03-29 00:02:34.682172 | orchestrator |       + volume_type = "ssd"
2026-03-29 00:02:34.682176 | orchestrator |     }
2026-03-29 00:02:34.682180 | orchestrator |
2026-03-29 00:02:34.682184 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-29 00:02:34.682187 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 00:02:34.682191 | orchestrator |       + attachment = (known after apply)
2026-03-29 00:02:34.682195 | orchestrator |       + availability_zone = "nova"
2026-03-29 00:02:34.682199 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.682202 | orchestrator |       + metadata = (known after apply)
2026-03-29 00:02:34.682206 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-03-29 00:02:34.682210 | orchestrator |       + region = (known after apply)
2026-03-29 00:02:34.682214 | orchestrator |       + size = 20
2026-03-29 00:02:34.682217 | orchestrator |       + volume_retype_policy = "never"
2026-03-29 00:02:34.682221 | orchestrator |       + volume_type = "ssd"
2026-03-29 00:02:34.682225 | orchestrator |     }
2026-03-29 00:02:34.682229 | orchestrator |
2026-03-29 00:02:34.682233 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-29 00:02:34.682236 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 00:02:34.682243 | orchestrator |       + attachment = (known after apply)
2026-03-29 00:02:34.682247 | orchestrator |       + availability_zone = "nova"
2026-03-29 00:02:34.682251 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.682254 | orchestrator |       + metadata = (known after apply)
2026-03-29 00:02:34.682258 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-03-29 00:02:34.682262 | orchestrator |       + region = (known after apply)
2026-03-29 00:02:34.682266 | orchestrator |       + size = 20
2026-03-29 00:02:34.682270 | orchestrator |       + volume_retype_policy = "never"
2026-03-29 00:02:34.682274 | orchestrator |       + volume_type = "ssd"
2026-03-29 00:02:34.682277 | orchestrator |     }
2026-03-29 00:02:34.682283 | orchestrator |
2026-03-29 00:02:34.682287 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-29 00:02:34.682290 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-29 00:02:34.682294 | orchestrator | + attachment = (known after apply) 2026-03-29 00:02:34.682298 | orchestrator | + availability_zone = "nova" 2026-03-29 00:02:34.682302 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.682305 | orchestrator | + metadata = (known after apply) 2026-03-29 00:02:34.682309 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-29 00:02:34.682313 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.682317 | orchestrator | + size = 20 2026-03-29 00:02:34.682321 | orchestrator | + volume_retype_policy = "never" 2026-03-29 00:02:34.682324 | orchestrator | + volume_type = "ssd" 2026-03-29 00:02:34.682328 | orchestrator | } 2026-03-29 00:02:34.683860 | orchestrator | 2026-03-29 00:02:34.683889 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-29 00:02:34.683894 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-29 00:02:34.683898 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 00:02:34.683902 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 00:02:34.683906 | orchestrator | + all_metadata = (known after apply) 2026-03-29 00:02:34.683910 | orchestrator | + all_tags = (known after apply) 2026-03-29 00:02:34.683914 | orchestrator | + availability_zone = "nova" 2026-03-29 00:02:34.683918 | orchestrator | + config_drive = true 2026-03-29 00:02:34.683921 | orchestrator | + created = (known after apply) 2026-03-29 00:02:34.683925 | orchestrator | + flavor_id = (known after apply) 2026-03-29 00:02:34.683929 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-29 00:02:34.683933 | orchestrator | + force_delete = false 2026-03-29 00:02:34.683936 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 00:02:34.683940 | 
orchestrator | + id = (known after apply) 2026-03-29 00:02:34.683944 | orchestrator | + image_id = (known after apply) 2026-03-29 00:02:34.683948 | orchestrator | + image_name = (known after apply) 2026-03-29 00:02:34.683952 | orchestrator | + key_pair = "testbed" 2026-03-29 00:02:34.683956 | orchestrator | + name = "testbed-manager" 2026-03-29 00:02:34.683959 | orchestrator | + power_state = "active" 2026-03-29 00:02:34.683963 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.683967 | orchestrator | + security_groups = (known after apply) 2026-03-29 00:02:34.683970 | orchestrator | + stop_before_destroy = false 2026-03-29 00:02:34.683974 | orchestrator | + updated = (known after apply) 2026-03-29 00:02:34.683978 | orchestrator | + user_data = (sensitive value) 2026-03-29 00:02:34.683982 | orchestrator | 2026-03-29 00:02:34.683986 | orchestrator | + block_device { 2026-03-29 00:02:34.683989 | orchestrator | + boot_index = 0 2026-03-29 00:02:34.683993 | orchestrator | + delete_on_termination = false 2026-03-29 00:02:34.684003 | orchestrator | + destination_type = "volume" 2026-03-29 00:02:34.684007 | orchestrator | + multiattach = false 2026-03-29 00:02:34.684011 | orchestrator | + source_type = "volume" 2026-03-29 00:02:34.684015 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.684026 | orchestrator | } 2026-03-29 00:02:34.684030 | orchestrator | 2026-03-29 00:02:34.684034 | orchestrator | + network { 2026-03-29 00:02:34.684038 | orchestrator | + access_network = false 2026-03-29 00:02:34.684042 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 00:02:34.684045 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 00:02:34.684049 | orchestrator | + mac = (known after apply) 2026-03-29 00:02:34.684053 | orchestrator | + name = (known after apply) 2026-03-29 00:02:34.684057 | orchestrator | + port = (known after apply) 2026-03-29 00:02:34.684060 | orchestrator | + uuid = (known after apply) 2026-03-29 
00:02:34.684064 | orchestrator | } 2026-03-29 00:02:34.684068 | orchestrator | } 2026-03-29 00:02:34.684072 | orchestrator | 2026-03-29 00:02:34.684076 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-29 00:02:34.684079 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-29 00:02:34.684083 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 00:02:34.684087 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 00:02:34.684091 | orchestrator | + all_metadata = (known after apply) 2026-03-29 00:02:34.684095 | orchestrator | + all_tags = (known after apply) 2026-03-29 00:02:34.684114 | orchestrator | + availability_zone = "nova" 2026-03-29 00:02:34.684118 | orchestrator | + config_drive = true 2026-03-29 00:02:34.684121 | orchestrator | + created = (known after apply) 2026-03-29 00:02:34.684125 | orchestrator | + flavor_id = (known after apply) 2026-03-29 00:02:34.684129 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-29 00:02:34.684133 | orchestrator | + force_delete = false 2026-03-29 00:02:34.684137 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 00:02:34.684140 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.684144 | orchestrator | + image_id = (known after apply) 2026-03-29 00:02:34.684148 | orchestrator | + image_name = (known after apply) 2026-03-29 00:02:34.684152 | orchestrator | + key_pair = "testbed" 2026-03-29 00:02:34.684155 | orchestrator | + name = "testbed-node-0" 2026-03-29 00:02:34.684159 | orchestrator | + power_state = "active" 2026-03-29 00:02:34.684163 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.684167 | orchestrator | + security_groups = (known after apply) 2026-03-29 00:02:34.684171 | orchestrator | + stop_before_destroy = false 2026-03-29 00:02:34.684174 | orchestrator | + updated = (known after apply) 2026-03-29 00:02:34.684178 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-29 00:02:34.684183 | orchestrator | 2026-03-29 00:02:34.684186 | orchestrator | + block_device { 2026-03-29 00:02:34.684190 | orchestrator | + boot_index = 0 2026-03-29 00:02:34.684194 | orchestrator | + delete_on_termination = false 2026-03-29 00:02:34.684198 | orchestrator | + destination_type = "volume" 2026-03-29 00:02:34.684201 | orchestrator | + multiattach = false 2026-03-29 00:02:34.684205 | orchestrator | + source_type = "volume" 2026-03-29 00:02:34.684209 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.684213 | orchestrator | } 2026-03-29 00:02:34.684216 | orchestrator | 2026-03-29 00:02:34.684220 | orchestrator | + network { 2026-03-29 00:02:34.684224 | orchestrator | + access_network = false 2026-03-29 00:02:34.684228 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 00:02:34.684232 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 00:02:34.684235 | orchestrator | + mac = (known after apply) 2026-03-29 00:02:34.684239 | orchestrator | + name = (known after apply) 2026-03-29 00:02:34.684243 | orchestrator | + port = (known after apply) 2026-03-29 00:02:34.684246 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.684250 | orchestrator | } 2026-03-29 00:02:34.684254 | orchestrator | } 2026-03-29 00:02:34.684258 | orchestrator | 2026-03-29 00:02:34.684262 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-29 00:02:34.684265 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-29 00:02:34.684269 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 00:02:34.684279 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 00:02:34.684283 | orchestrator | + all_metadata = (known after apply) 2026-03-29 00:02:34.684287 | orchestrator | + all_tags = (known after apply) 2026-03-29 00:02:34.684291 | orchestrator | + availability_zone = "nova" 2026-03-29 00:02:34.684294 
| orchestrator | + config_drive = true 2026-03-29 00:02:34.684298 | orchestrator | + created = (known after apply) 2026-03-29 00:02:34.684307 | orchestrator | + flavor_id = (known after apply) 2026-03-29 00:02:34.684311 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-29 00:02:34.684315 | orchestrator | + force_delete = false 2026-03-29 00:02:34.684319 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 00:02:34.684322 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.684326 | orchestrator | + image_id = (known after apply) 2026-03-29 00:02:34.684330 | orchestrator | + image_name = (known after apply) 2026-03-29 00:02:34.684334 | orchestrator | + key_pair = "testbed" 2026-03-29 00:02:34.684337 | orchestrator | + name = "testbed-node-1" 2026-03-29 00:02:34.684341 | orchestrator | + power_state = "active" 2026-03-29 00:02:34.684345 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.684349 | orchestrator | + security_groups = (known after apply) 2026-03-29 00:02:34.684352 | orchestrator | + stop_before_destroy = false 2026-03-29 00:02:34.684356 | orchestrator | + updated = (known after apply) 2026-03-29 00:02:34.684360 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-29 00:02:34.684364 | orchestrator | 2026-03-29 00:02:34.684367 | orchestrator | + block_device { 2026-03-29 00:02:34.684371 | orchestrator | + boot_index = 0 2026-03-29 00:02:34.684375 | orchestrator | + delete_on_termination = false 2026-03-29 00:02:34.684379 | orchestrator | + destination_type = "volume" 2026-03-29 00:02:34.684382 | orchestrator | + multiattach = false 2026-03-29 00:02:34.684386 | orchestrator | + source_type = "volume" 2026-03-29 00:02:34.684390 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.684394 | orchestrator | } 2026-03-29 00:02:34.684398 | orchestrator | 2026-03-29 00:02:34.684401 | orchestrator | + network { 2026-03-29 00:02:34.684405 | orchestrator | + access_network = 
false 2026-03-29 00:02:34.684409 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 00:02:34.684413 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 00:02:34.684416 | orchestrator | + mac = (known after apply) 2026-03-29 00:02:34.684420 | orchestrator | + name = (known after apply) 2026-03-29 00:02:34.684424 | orchestrator | + port = (known after apply) 2026-03-29 00:02:34.684427 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.684431 | orchestrator | } 2026-03-29 00:02:34.684435 | orchestrator | } 2026-03-29 00:02:34.684439 | orchestrator | 2026-03-29 00:02:34.684474 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-29 00:02:34.684479 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-29 00:02:34.684483 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 00:02:34.684486 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 00:02:34.684491 | orchestrator | + all_metadata = (known after apply) 2026-03-29 00:02:34.684495 | orchestrator | + all_tags = (known after apply) 2026-03-29 00:02:34.684502 | orchestrator | + availability_zone = "nova" 2026-03-29 00:02:34.684506 | orchestrator | + config_drive = true 2026-03-29 00:02:34.684509 | orchestrator | + created = (known after apply) 2026-03-29 00:02:34.684513 | orchestrator | + flavor_id = (known after apply) 2026-03-29 00:02:34.684517 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-29 00:02:34.684521 | orchestrator | + force_delete = false 2026-03-29 00:02:34.684524 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 00:02:34.684528 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.684532 | orchestrator | + image_id = (known after apply) 2026-03-29 00:02:34.684539 | orchestrator | + image_name = (known after apply) 2026-03-29 00:02:34.684543 | orchestrator | + key_pair = "testbed" 2026-03-29 00:02:34.684546 | orchestrator | + name = 
"testbed-node-2" 2026-03-29 00:02:34.684550 | orchestrator | + power_state = "active" 2026-03-29 00:02:34.684554 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.684558 | orchestrator | + security_groups = (known after apply) 2026-03-29 00:02:34.684561 | orchestrator | + stop_before_destroy = false 2026-03-29 00:02:34.684565 | orchestrator | + updated = (known after apply) 2026-03-29 00:02:34.684569 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-29 00:02:34.684573 | orchestrator | 2026-03-29 00:02:34.684576 | orchestrator | + block_device { 2026-03-29 00:02:34.684580 | orchestrator | + boot_index = 0 2026-03-29 00:02:34.684584 | orchestrator | + delete_on_termination = false 2026-03-29 00:02:34.684587 | orchestrator | + destination_type = "volume" 2026-03-29 00:02:34.684591 | orchestrator | + multiattach = false 2026-03-29 00:02:34.684595 | orchestrator | + source_type = "volume" 2026-03-29 00:02:34.684598 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.684602 | orchestrator | } 2026-03-29 00:02:34.684606 | orchestrator | 2026-03-29 00:02:34.684610 | orchestrator | + network { 2026-03-29 00:02:34.684614 | orchestrator | + access_network = false 2026-03-29 00:02:34.684617 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 00:02:34.684621 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 00:02:34.684625 | orchestrator | + mac = (known after apply) 2026-03-29 00:02:34.684628 | orchestrator | + name = (known after apply) 2026-03-29 00:02:34.684632 | orchestrator | + port = (known after apply) 2026-03-29 00:02:34.684636 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.684640 | orchestrator | } 2026-03-29 00:02:34.684643 | orchestrator | } 2026-03-29 00:02:34.684647 | orchestrator | 2026-03-29 00:02:34.684651 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-29 00:02:34.684655 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-29 00:02:34.684658 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 00:02:34.684662 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 00:02:34.684666 | orchestrator | + all_metadata = (known after apply) 2026-03-29 00:02:34.684670 | orchestrator | + all_tags = (known after apply) 2026-03-29 00:02:34.684673 | orchestrator | + availability_zone = "nova" 2026-03-29 00:02:34.684677 | orchestrator | + config_drive = true 2026-03-29 00:02:34.684681 | orchestrator | + created = (known after apply) 2026-03-29 00:02:34.684684 | orchestrator | + flavor_id = (known after apply) 2026-03-29 00:02:34.684688 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-29 00:02:34.684692 | orchestrator | + force_delete = false 2026-03-29 00:02:34.684696 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 00:02:34.684699 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.684703 | orchestrator | + image_id = (known after apply) 2026-03-29 00:02:34.684707 | orchestrator | + image_name = (known after apply) 2026-03-29 00:02:34.684711 | orchestrator | + key_pair = "testbed" 2026-03-29 00:02:34.684714 | orchestrator | + name = "testbed-node-3" 2026-03-29 00:02:34.684721 | orchestrator | + power_state = "active" 2026-03-29 00:02:34.684725 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.684728 | orchestrator | + security_groups = (known after apply) 2026-03-29 00:02:34.684732 | orchestrator | + stop_before_destroy = false 2026-03-29 00:02:34.684736 | orchestrator | + updated = (known after apply) 2026-03-29 00:02:34.684740 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-29 00:02:34.684743 | orchestrator | 2026-03-29 00:02:34.684747 | orchestrator | + block_device { 2026-03-29 00:02:34.684754 | orchestrator | + boot_index = 0 2026-03-29 00:02:34.684758 | orchestrator | + delete_on_termination = false 2026-03-29 
00:02:34.684761 | orchestrator | + destination_type = "volume" 2026-03-29 00:02:34.684768 | orchestrator | + multiattach = false 2026-03-29 00:02:34.684772 | orchestrator | + source_type = "volume" 2026-03-29 00:02:34.684775 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.684779 | orchestrator | } 2026-03-29 00:02:34.684783 | orchestrator | 2026-03-29 00:02:34.684787 | orchestrator | + network { 2026-03-29 00:02:34.684790 | orchestrator | + access_network = false 2026-03-29 00:02:34.684794 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 00:02:34.684798 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 00:02:34.684801 | orchestrator | + mac = (known after apply) 2026-03-29 00:02:34.684805 | orchestrator | + name = (known after apply) 2026-03-29 00:02:34.684809 | orchestrator | + port = (known after apply) 2026-03-29 00:02:34.684813 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.684816 | orchestrator | } 2026-03-29 00:02:34.684820 | orchestrator | } 2026-03-29 00:02:34.684824 | orchestrator | 2026-03-29 00:02:34.684828 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-29 00:02:34.684832 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-29 00:02:34.684835 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 00:02:34.684839 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 00:02:34.684843 | orchestrator | + all_metadata = (known after apply) 2026-03-29 00:02:34.684847 | orchestrator | + all_tags = (known after apply) 2026-03-29 00:02:34.684850 | orchestrator | + availability_zone = "nova" 2026-03-29 00:02:34.684854 | orchestrator | + config_drive = true 2026-03-29 00:02:34.684858 | orchestrator | + created = (known after apply) 2026-03-29 00:02:34.684861 | orchestrator | + flavor_id = (known after apply) 2026-03-29 00:02:34.684865 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-29 00:02:34.684869 | 
orchestrator | + force_delete = false 2026-03-29 00:02:34.684873 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 00:02:34.684876 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.684880 | orchestrator | + image_id = (known after apply) 2026-03-29 00:02:34.684884 | orchestrator | + image_name = (known after apply) 2026-03-29 00:02:34.684887 | orchestrator | + key_pair = "testbed" 2026-03-29 00:02:34.684891 | orchestrator | + name = "testbed-node-4" 2026-03-29 00:02:34.684895 | orchestrator | + power_state = "active" 2026-03-29 00:02:34.684899 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.684902 | orchestrator | + security_groups = (known after apply) 2026-03-29 00:02:34.684906 | orchestrator | + stop_before_destroy = false 2026-03-29 00:02:34.684910 | orchestrator | + updated = (known after apply) 2026-03-29 00:02:34.684914 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-29 00:02:34.684917 | orchestrator | 2026-03-29 00:02:34.684921 | orchestrator | + block_device { 2026-03-29 00:02:34.684925 | orchestrator | + boot_index = 0 2026-03-29 00:02:34.684928 | orchestrator | + delete_on_termination = false 2026-03-29 00:02:34.684932 | orchestrator | + destination_type = "volume" 2026-03-29 00:02:34.684936 | orchestrator | + multiattach = false 2026-03-29 00:02:34.684940 | orchestrator | + source_type = "volume" 2026-03-29 00:02:34.684943 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.684947 | orchestrator | } 2026-03-29 00:02:34.684951 | orchestrator | 2026-03-29 00:02:34.684955 | orchestrator | + network { 2026-03-29 00:02:34.684958 | orchestrator | + access_network = false 2026-03-29 00:02:34.684962 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 00:02:34.684966 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 00:02:34.684970 | orchestrator | + mac = (known after apply) 2026-03-29 00:02:34.684973 | orchestrator | + name = (known 
after apply) 2026-03-29 00:02:34.684977 | orchestrator | + port = (known after apply) 2026-03-29 00:02:34.684981 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.684984 | orchestrator | } 2026-03-29 00:02:34.684988 | orchestrator | } 2026-03-29 00:02:34.684995 | orchestrator | 2026-03-29 00:02:34.684999 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-29 00:02:34.685003 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-29 00:02:34.685006 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 00:02:34.685010 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 00:02:34.685014 | orchestrator | + all_metadata = (known after apply) 2026-03-29 00:02:34.685018 | orchestrator | + all_tags = (known after apply) 2026-03-29 00:02:34.685021 | orchestrator | + availability_zone = "nova" 2026-03-29 00:02:34.685025 | orchestrator | + config_drive = true 2026-03-29 00:02:34.685029 | orchestrator | + created = (known after apply) 2026-03-29 00:02:34.685033 | orchestrator | + flavor_id = (known after apply) 2026-03-29 00:02:34.685036 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-29 00:02:34.685040 | orchestrator | + force_delete = false 2026-03-29 00:02:34.685046 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 00:02:34.685050 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.685054 | orchestrator | + image_id = (known after apply) 2026-03-29 00:02:34.685058 | orchestrator | + image_name = (known after apply) 2026-03-29 00:02:34.685061 | orchestrator | + key_pair = "testbed" 2026-03-29 00:02:34.685065 | orchestrator | + name = "testbed-node-5" 2026-03-29 00:02:34.685069 | orchestrator | + power_state = "active" 2026-03-29 00:02:34.685073 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.685076 | orchestrator | + security_groups = (known after apply) 2026-03-29 00:02:34.685080 | orchestrator | + 
stop_before_destroy = false 2026-03-29 00:02:34.685084 | orchestrator | + updated = (known after apply) 2026-03-29 00:02:34.685087 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-29 00:02:34.685091 | orchestrator | 2026-03-29 00:02:34.685095 | orchestrator | + block_device { 2026-03-29 00:02:34.685099 | orchestrator | + boot_index = 0 2026-03-29 00:02:34.685103 | orchestrator | + delete_on_termination = false 2026-03-29 00:02:34.685106 | orchestrator | + destination_type = "volume" 2026-03-29 00:02:34.685113 | orchestrator | + multiattach = false 2026-03-29 00:02:34.685117 | orchestrator | + source_type = "volume" 2026-03-29 00:02:34.685121 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.685125 | orchestrator | } 2026-03-29 00:02:34.685129 | orchestrator | 2026-03-29 00:02:34.685132 | orchestrator | + network { 2026-03-29 00:02:34.685136 | orchestrator | + access_network = false 2026-03-29 00:02:34.685140 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 00:02:34.685144 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 00:02:34.685147 | orchestrator | + mac = (known after apply) 2026-03-29 00:02:34.685151 | orchestrator | + name = (known after apply) 2026-03-29 00:02:34.685155 | orchestrator | + port = (known after apply) 2026-03-29 00:02:34.685159 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.685163 | orchestrator | } 2026-03-29 00:02:34.685167 | orchestrator | } 2026-03-29 00:02:34.685170 | orchestrator | 2026-03-29 00:02:34.685174 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-29 00:02:34.685178 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-29 00:02:34.685182 | orchestrator | + fingerprint = (known after apply) 2026-03-29 00:02:34.685186 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.685189 | orchestrator | + name = "testbed" 2026-03-29 00:02:34.685193 | orchestrator | + private_key = 
(sensitive value) 2026-03-29 00:02:34.685197 | orchestrator | + public_key = (known after apply) 2026-03-29 00:02:34.685201 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.685204 | orchestrator | + user_id = (known after apply) 2026-03-29 00:02:34.685208 | orchestrator | } 2026-03-29 00:02:34.685212 | orchestrator | 2026-03-29 00:02:34.685216 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-29 00:02:34.685220 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-29 00:02:34.685226 | orchestrator | + device = (known after apply) 2026-03-29 00:02:34.685230 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.685234 | orchestrator | + instance_id = (known after apply) 2026-03-29 00:02:34.685238 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.685242 | orchestrator | + volume_id = (known after apply) 2026-03-29 00:02:34.685245 | orchestrator | } 2026-03-29 00:02:34.685249 | orchestrator | 2026-03-29 00:02:34.685253 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-29 00:02:34.685257 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-29 00:02:34.685261 | orchestrator | + device = (known after apply) 2026-03-29 00:02:34.685264 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.685268 | orchestrator | + instance_id = (known after apply) 2026-03-29 00:02:34.685272 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.685276 | orchestrator | + volume_id = (known after apply) 2026-03-29 00:02:34.685279 | orchestrator | } 2026-03-29 00:02:34.685283 | orchestrator | 2026-03-29 00:02:34.685287 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-29 00:02:34.685291 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-03-29 00:02:34.685295 | orchestrator | + device = (known after apply)
2026-03-29 00:02:34.685299 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.685302 | orchestrator | + instance_id = (known after apply)
2026-03-29 00:02:34.685306 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.685310 | orchestrator | + volume_id = (known after apply)
2026-03-29 00:02:34.685314 | orchestrator | }
2026-03-29 00:02:34.685317 | orchestrator |
2026-03-29 00:02:34.685321 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-03-29 00:02:34.685325 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-29 00:02:34.685329 | orchestrator | + device = (known after apply)
2026-03-29 00:02:34.685333 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.685336 | orchestrator | + instance_id = (known after apply)
2026-03-29 00:02:34.685340 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.685344 | orchestrator | + volume_id = (known after apply)
2026-03-29 00:02:34.685348 | orchestrator | }
2026-03-29 00:02:34.685352 | orchestrator |
2026-03-29 00:02:34.685355 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-03-29 00:02:34.685359 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-29 00:02:34.685363 | orchestrator | + device = (known after apply)
2026-03-29 00:02:34.685367 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.685371 | orchestrator | + instance_id = (known after apply)
2026-03-29 00:02:34.685377 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.685381 | orchestrator | + volume_id = (known after apply)
2026-03-29 00:02:34.685385 | orchestrator | }
2026-03-29 00:02:34.685389 | orchestrator |
2026-03-29 00:02:34.685393 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-03-29 00:02:34.685396 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-29 00:02:34.685400 | orchestrator | + device = (known after apply)
2026-03-29 00:02:34.685404 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.685408 | orchestrator | + instance_id = (known after apply)
2026-03-29 00:02:34.685411 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.685415 | orchestrator | + volume_id = (known after apply)
2026-03-29 00:02:34.685419 | orchestrator | }
2026-03-29 00:02:34.685423 | orchestrator |
2026-03-29 00:02:34.685426 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-03-29 00:02:34.685430 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-29 00:02:34.685434 | orchestrator | + device = (known after apply)
2026-03-29 00:02:34.685438 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.685442 | orchestrator | + instance_id = (known after apply)
2026-03-29 00:02:34.685454 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.685460 | orchestrator | + volume_id = (known after apply)
2026-03-29 00:02:34.685464 | orchestrator | }
2026-03-29 00:02:34.685468 | orchestrator |
2026-03-29 00:02:34.685472 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-03-29 00:02:34.685476 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-29 00:02:34.685480 | orchestrator | + device = (known after apply)
2026-03-29 00:02:34.685483 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.685487 | orchestrator | + instance_id = (known after apply)
2026-03-29 00:02:34.685491 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.685495 | orchestrator | + volume_id = (known after apply)
2026-03-29 00:02:34.685499 | orchestrator | }
2026-03-29 00:02:34.685503 | orchestrator |
2026-03-29 00:02:34.685506 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-03-29 00:02:34.685510 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-29 00:02:34.685514 | orchestrator | + device = (known after apply)
2026-03-29 00:02:34.685520 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.685524 | orchestrator | + instance_id = (known after apply)
2026-03-29 00:02:34.685528 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.685531 | orchestrator | + volume_id = (known after apply)
2026-03-29 00:02:34.685535 | orchestrator | }
2026-03-29 00:02:34.685539 | orchestrator |
2026-03-29 00:02:34.685543 | orchestrator | # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-03-29 00:02:34.685547 | orchestrator | + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-03-29 00:02:34.685551 | orchestrator | + fixed_ip = (known after apply)
2026-03-29 00:02:34.685555 | orchestrator | + floating_ip = (known after apply)
2026-03-29 00:02:34.685558 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.685562 | orchestrator | + port_id = (known after apply)
2026-03-29 00:02:34.685566 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.685570 | orchestrator | }
2026-03-29 00:02:34.685574 | orchestrator |
2026-03-29 00:02:34.685577 | orchestrator | # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-03-29 00:02:34.685581 | orchestrator | + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-03-29 00:02:34.685585 | orchestrator | + address = (known after apply)
2026-03-29 00:02:34.685589 | orchestrator | + all_tags = (known after apply)
2026-03-29 00:02:34.685592 | orchestrator | + dns_domain = (known after apply)
2026-03-29 00:02:34.685596 | orchestrator | + dns_name = (known after apply)
2026-03-29 00:02:34.685600 | orchestrator | + fixed_ip = (known after apply)
2026-03-29 00:02:34.685604 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.685607 | orchestrator | + pool = "public"
2026-03-29 00:02:34.685611 | orchestrator | + port_id = (known after apply)
2026-03-29 00:02:34.685615 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.685619 | orchestrator | + subnet_id = (known after apply)
2026-03-29 00:02:34.685622 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.685626 | orchestrator | }
2026-03-29 00:02:34.685630 | orchestrator |
2026-03-29 00:02:34.685634 | orchestrator | # openstack_networking_network_v2.net_management will be created
2026-03-29 00:02:34.685637 | orchestrator | + resource "openstack_networking_network_v2" "net_management" {
2026-03-29 00:02:34.685641 | orchestrator | + admin_state_up = (known after apply)
2026-03-29 00:02:34.685645 | orchestrator | + all_tags = (known after apply)
2026-03-29 00:02:34.685649 | orchestrator | + availability_zone_hints = [
2026-03-29 00:02:34.685653 | orchestrator | + "nova",
2026-03-29 00:02:34.685657 | orchestrator | ]
2026-03-29 00:02:34.685660 | orchestrator | + dns_domain = (known after apply)
2026-03-29 00:02:34.685664 | orchestrator | + external = (known after apply)
2026-03-29 00:02:34.685668 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.685672 | orchestrator | + mtu = (known after apply)
2026-03-29 00:02:34.685675 | orchestrator | + name = "net-testbed-management"
2026-03-29 00:02:34.685679 | orchestrator | + port_security_enabled = (known after apply)
2026-03-29 00:02:34.685686 | orchestrator | + qos_policy_id = (known after apply)
2026-03-29 00:02:34.685690 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.685693 | orchestrator | + shared = (known after apply)
2026-03-29 00:02:34.685697 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.685701 | orchestrator | + transparent_vlan = (known after apply)
2026-03-29 00:02:34.685705 | orchestrator |
2026-03-29 00:02:34.685708 | orchestrator | + segments (known after apply)
2026-03-29 00:02:34.685712 | orchestrator | }
2026-03-29 00:02:34.685716 | orchestrator |
2026-03-29 00:02:34.685720 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created
2026-03-29 00:02:34.685724 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" {
2026-03-29 00:02:34.685727 | orchestrator | + admin_state_up = (known after apply)
2026-03-29 00:02:34.685731 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-29 00:02:34.685735 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-29 00:02:34.685741 | orchestrator | + all_tags = (known after apply)
2026-03-29 00:02:34.685745 | orchestrator | + device_id = (known after apply)
2026-03-29 00:02:34.685749 | orchestrator | + device_owner = (known after apply)
2026-03-29 00:02:34.685752 | orchestrator | + dns_assignment = (known after apply)
2026-03-29 00:02:34.685756 | orchestrator | + dns_name = (known after apply)
2026-03-29 00:02:34.685760 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.685764 | orchestrator | + mac_address = (known after apply)
2026-03-29 00:02:34.685767 | orchestrator | + network_id = (known after apply)
2026-03-29 00:02:34.685771 | orchestrator | + port_security_enabled = (known after apply)
2026-03-29 00:02:34.685775 | orchestrator | + qos_policy_id = (known after apply)
2026-03-29 00:02:34.685779 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.685782 | orchestrator | + security_group_ids = (known after apply)
2026-03-29 00:02:34.685786 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.685790 | orchestrator |
2026-03-29 00:02:34.685794 | orchestrator | + allowed_address_pairs {
2026-03-29 00:02:34.685798 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-29 00:02:34.685801 | orchestrator | }
2026-03-29 00:02:34.685805 | orchestrator |
2026-03-29 00:02:34.685809 | orchestrator | + binding (known after apply)
2026-03-29 00:02:34.685813 | orchestrator |
2026-03-29 00:02:34.685817 | orchestrator | + fixed_ip {
2026-03-29 00:02:34.685820 | orchestrator | + ip_address = "192.168.16.5"
2026-03-29 00:02:34.685834 | orchestrator | + subnet_id = (known after apply)
2026-03-29 00:02:34.685838 | orchestrator | }
2026-03-29 00:02:34.685842 | orchestrator | }
2026-03-29 00:02:34.685846 | orchestrator |
2026-03-29 00:02:34.685849 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created
2026-03-29 00:02:34.685853 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-29 00:02:34.685857 | orchestrator | + admin_state_up = (known after apply)
2026-03-29 00:02:34.685861 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-29 00:02:34.685865 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-29 00:02:34.685868 | orchestrator | + all_tags = (known after apply)
2026-03-29 00:02:34.685872 | orchestrator | + device_id = (known after apply)
2026-03-29 00:02:34.685876 | orchestrator | + device_owner = (known after apply)
2026-03-29 00:02:34.685880 | orchestrator | + dns_assignment = (known after apply)
2026-03-29 00:02:34.685884 | orchestrator | + dns_name = (known after apply)
2026-03-29 00:02:34.685887 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.685891 | orchestrator | + mac_address = (known after apply)
2026-03-29 00:02:34.685895 | orchestrator | + network_id = (known after apply)
2026-03-29 00:02:34.685904 | orchestrator | + port_security_enabled = (known after apply)
2026-03-29 00:02:34.685908 | orchestrator | + qos_policy_id = (known after apply)
2026-03-29 00:02:34.685912 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.685919 | orchestrator | + security_group_ids = (known after apply)
2026-03-29 00:02:34.685923 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.685927 | orchestrator |
2026-03-29 00:02:34.685931 | orchestrator | + allowed_address_pairs {
2026-03-29 00:02:34.685934 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-29 00:02:34.685938 | orchestrator | }
2026-03-29 00:02:34.685942 | orchestrator | + allowed_address_pairs {
2026-03-29 00:02:34.685946 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-29 00:02:34.685950 | orchestrator | }
2026-03-29 00:02:34.685953 | orchestrator | + allowed_address_pairs {
2026-03-29 00:02:34.685957 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-29 00:02:34.685961 | orchestrator | }
2026-03-29 00:02:34.685965 | orchestrator |
2026-03-29 00:02:34.685969 | orchestrator | + binding (known after apply)
2026-03-29 00:02:34.685972 | orchestrator |
2026-03-29 00:02:34.685976 | orchestrator | + fixed_ip {
2026-03-29 00:02:34.685980 | orchestrator | + ip_address = "192.168.16.10"
2026-03-29 00:02:34.685984 | orchestrator | + subnet_id = (known after apply)
2026-03-29 00:02:34.685987 | orchestrator | }
2026-03-29 00:02:34.685991 | orchestrator | }
2026-03-29 00:02:34.685995 | orchestrator |
2026-03-29 00:02:34.685999 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created
2026-03-29 00:02:34.686003 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-29 00:02:34.686006 | orchestrator | + admin_state_up = (known after apply)
2026-03-29 00:02:34.686010 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-29 00:02:34.686028 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-29 00:02:34.686032 | orchestrator | + all_tags = (known after apply)
2026-03-29 00:02:34.686036 | orchestrator | + device_id = (known after apply)
2026-03-29 00:02:34.686059 | orchestrator | + device_owner = (known after apply)
2026-03-29 00:02:34.686063 | orchestrator | + dns_assignment = (known after apply)
2026-03-29 00:02:34.686067 | orchestrator | + dns_name = (known after apply)
2026-03-29 00:02:34.686071 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.686074 | orchestrator | + mac_address = (known after apply)
2026-03-29 00:02:34.686078 | orchestrator | + network_id = (known after apply)
2026-03-29 00:02:34.686082 | orchestrator | + port_security_enabled = (known after apply)
2026-03-29 00:02:34.686086 | orchestrator | + qos_policy_id = (known after apply)
2026-03-29 00:02:34.686089 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.686093 | orchestrator | + security_group_ids = (known after apply)
2026-03-29 00:02:34.686097 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.686101 | orchestrator |
2026-03-29 00:02:34.686104 | orchestrator | + allowed_address_pairs {
2026-03-29 00:02:34.686108 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-29 00:02:34.686112 | orchestrator | }
2026-03-29 00:02:34.686116 | orchestrator | + allowed_address_pairs {
2026-03-29 00:02:34.686120 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-29 00:02:34.686123 | orchestrator | }
2026-03-29 00:02:34.686127 | orchestrator | + allowed_address_pairs {
2026-03-29 00:02:34.686131 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-29 00:02:34.686135 | orchestrator | }
2026-03-29 00:02:34.686138 | orchestrator |
2026-03-29 00:02:34.686142 | orchestrator | + binding (known after apply)
2026-03-29 00:02:34.686146 | orchestrator |
2026-03-29 00:02:34.686150 | orchestrator | + fixed_ip {
2026-03-29 00:02:34.686153 | orchestrator | + ip_address = "192.168.16.11"
2026-03-29 00:02:34.686157 | orchestrator | + subnet_id = (known after apply)
2026-03-29 00:02:34.686161 | orchestrator | }
2026-03-29 00:02:34.686164 | orchestrator | }
2026-03-29 00:02:34.686168 | orchestrator |
2026-03-29 00:02:34.686172 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created
2026-03-29 00:02:34.686176 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-29 00:02:34.686180 | orchestrator | + admin_state_up = (known after apply)
2026-03-29 00:02:34.686183 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-29 00:02:34.686187 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-29 00:02:34.686191 | orchestrator | + all_tags = (known after apply)
2026-03-29 00:02:34.686198 | orchestrator | + device_id = (known after apply)
2026-03-29 00:02:34.686202 | orchestrator | + device_owner = (known after apply)
2026-03-29 00:02:34.686205 | orchestrator | + dns_assignment = (known after apply)
2026-03-29 00:02:34.686209 | orchestrator | + dns_name = (known after apply)
2026-03-29 00:02:34.686215 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.686219 | orchestrator | + mac_address = (known after apply)
2026-03-29 00:02:34.686223 | orchestrator | + network_id = (known after apply)
2026-03-29 00:02:34.686227 | orchestrator | + port_security_enabled = (known after apply)
2026-03-29 00:02:34.686231 | orchestrator | + qos_policy_id = (known after apply)
2026-03-29 00:02:34.686234 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.686238 | orchestrator | + security_group_ids = (known after apply)
2026-03-29 00:02:34.686242 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.686246 | orchestrator |
2026-03-29 00:02:34.686249 | orchestrator | + allowed_address_pairs {
2026-03-29 00:02:34.686253 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-29 00:02:34.686257 | orchestrator | }
2026-03-29 00:02:34.686261 | orchestrator | + allowed_address_pairs {
2026-03-29 00:02:34.686264 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-29 00:02:34.686268 | orchestrator | }
2026-03-29 00:02:34.686272 | orchestrator | + allowed_address_pairs {
2026-03-29 00:02:34.686276 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-29 00:02:34.686279 | orchestrator | }
2026-03-29 00:02:34.686283 | orchestrator |
2026-03-29 00:02:34.686287 | orchestrator | + binding (known after apply)
2026-03-29 00:02:34.686291 | orchestrator |
2026-03-29 00:02:34.686294 | orchestrator | + fixed_ip {
2026-03-29 00:02:34.686298 | orchestrator | + ip_address = "192.168.16.12"
2026-03-29 00:02:34.686302 | orchestrator | + subnet_id = (known after apply)
2026-03-29 00:02:34.686305 | orchestrator | }
2026-03-29 00:02:34.686309 | orchestrator | }
2026-03-29 00:02:34.686313 | orchestrator |
2026-03-29 00:02:34.686317 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created
2026-03-29 00:02:34.686320 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-29 00:02:34.686324 | orchestrator | + admin_state_up = (known after apply)
2026-03-29 00:02:34.686328 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-29 00:02:34.686332 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-29 00:02:34.686336 | orchestrator | + all_tags = (known after apply)
2026-03-29 00:02:34.686340 | orchestrator | + device_id = (known after apply)
2026-03-29 00:02:34.686343 | orchestrator | + device_owner = (known after apply)
2026-03-29 00:02:34.686347 | orchestrator | + dns_assignment = (known after apply)
2026-03-29 00:02:34.686351 | orchestrator | + dns_name = (known after apply)
2026-03-29 00:02:34.686357 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.686361 | orchestrator | + mac_address = (known after apply)
2026-03-29 00:02:34.686365 | orchestrator | + network_id = (known after apply)
2026-03-29 00:02:34.686368 | orchestrator | + port_security_enabled = (known after apply)
2026-03-29 00:02:34.686372 | orchestrator | + qos_policy_id = (known after apply)
2026-03-29 00:02:34.686376 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.686380 | orchestrator | + security_group_ids = (known after apply)
2026-03-29 00:02:34.686383 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.686387 | orchestrator |
2026-03-29 00:02:34.686391 | orchestrator | + allowed_address_pairs {
2026-03-29 00:02:34.686395 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-29 00:02:34.686399 | orchestrator | }
2026-03-29 00:02:34.686402 | orchestrator | + allowed_address_pairs {
2026-03-29 00:02:34.686406 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-29 00:02:34.686410 | orchestrator | }
2026-03-29 00:02:34.686414 | orchestrator | + allowed_address_pairs {
2026-03-29 00:02:34.686417 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-29 00:02:34.686421 | orchestrator | }
2026-03-29 00:02:34.686425 | orchestrator |
2026-03-29 00:02:34.686434 | orchestrator | + binding (known after apply)
2026-03-29 00:02:34.686438 | orchestrator |
2026-03-29 00:02:34.686442 | orchestrator | + fixed_ip {
2026-03-29 00:02:34.686457 | orchestrator | + ip_address = "192.168.16.13"
2026-03-29 00:02:34.686461 | orchestrator | + subnet_id = (known after apply)
2026-03-29 00:02:34.686465 | orchestrator | }
2026-03-29 00:02:34.686469 | orchestrator | }
2026-03-29 00:02:34.693878 | orchestrator |
2026-03-29 00:02:34.693938 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created
2026-03-29 00:02:34.693944 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-29 00:02:34.693949 | orchestrator | + admin_state_up = (known after apply)
2026-03-29 00:02:34.693954 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-29 00:02:34.693958 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-29 00:02:34.693974 | orchestrator | + all_tags = (known after apply)
2026-03-29 00:02:34.693979 | orchestrator | + device_id = (known after apply)
2026-03-29 00:02:34.693983 | orchestrator | + device_owner = (known after apply)
2026-03-29 00:02:34.693987 | orchestrator | + dns_assignment = (known after apply)
2026-03-29 00:02:34.693991 | orchestrator | + dns_name = (known after apply)
2026-03-29 00:02:34.693995 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.693998 | orchestrator | + mac_address = (known after apply)
2026-03-29 00:02:34.694002 | orchestrator | + network_id = (known after apply)
2026-03-29 00:02:34.694006 | orchestrator | + port_security_enabled = (known after apply)
2026-03-29 00:02:34.694010 | orchestrator | + qos_policy_id = (known after apply)
2026-03-29 00:02:34.697128 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.697139 | orchestrator | + security_group_ids = (known after apply)
2026-03-29 00:02:34.697144 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.697150 | orchestrator |
2026-03-29 00:02:34.697154 | orchestrator | + allowed_address_pairs {
2026-03-29 00:02:34.697159 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-29 00:02:34.697163 | orchestrator | }
2026-03-29 00:02:34.697167 | orchestrator | + allowed_address_pairs {
2026-03-29 00:02:34.697191 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-29 00:02:34.697195 | orchestrator | }
2026-03-29 00:02:34.697199 | orchestrator | + allowed_address_pairs {
2026-03-29 00:02:34.697203 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-29 00:02:34.697207 | orchestrator | }
2026-03-29 00:02:34.697211 | orchestrator |
2026-03-29 00:02:34.697215 | orchestrator | + binding (known after apply)
2026-03-29 00:02:34.697219 | orchestrator |
2026-03-29 00:02:34.697223 | orchestrator | + fixed_ip {
2026-03-29 00:02:34.697227 | orchestrator | + ip_address = "192.168.16.14"
2026-03-29 00:02:34.697231 | orchestrator | + subnet_id = (known after apply)
2026-03-29 00:02:34.697235 | orchestrator | }
2026-03-29 00:02:34.697239 | orchestrator | }
2026-03-29 00:02:34.698825 | orchestrator |
2026-03-29 00:02:34.698874 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created
2026-03-29 00:02:34.698880 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-29 00:02:34.698885 | orchestrator | + admin_state_up = (known after apply)
2026-03-29 00:02:34.698889 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-29 00:02:34.698893 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-29 00:02:34.698897 | orchestrator | + all_tags = (known after apply)
2026-03-29 00:02:34.698901 | orchestrator | + device_id = (known after apply)
2026-03-29 00:02:34.698905 | orchestrator | + device_owner = (known after apply)
2026-03-29 00:02:34.698924 | orchestrator | + dns_assignment = (known after apply)
2026-03-29 00:02:34.698928 | orchestrator | + dns_name = (known after apply)
2026-03-29 00:02:34.698932 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.698935 | orchestrator | + mac_address = (known after apply)
2026-03-29 00:02:34.698939 | orchestrator | + network_id = (known after apply)
2026-03-29 00:02:34.698943 | orchestrator | + port_security_enabled = (known after apply)
2026-03-29 00:02:34.698947 | orchestrator | + qos_policy_id = (known after apply)
2026-03-29 00:02:34.698959 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.698963 | orchestrator | + security_group_ids = (known after apply)
2026-03-29 00:02:34.698967 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.698971 | orchestrator |
2026-03-29 00:02:34.698975 | orchestrator | + allowed_address_pairs {
2026-03-29 00:02:34.698979 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-29 00:02:34.698983 | orchestrator | }
2026-03-29 00:02:34.699001 | orchestrator | + allowed_address_pairs {
2026-03-29 00:02:34.699005 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-29 00:02:34.699009 | orchestrator | }
2026-03-29 00:02:34.699013 | orchestrator | + allowed_address_pairs {
2026-03-29 00:02:34.699017 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-29 00:02:34.699020 | orchestrator | }
2026-03-29 00:02:34.699024 | orchestrator |
2026-03-29 00:02:34.699033 | orchestrator | + binding (known after apply)
2026-03-29 00:02:34.699037 | orchestrator |
2026-03-29 00:02:34.699041 | orchestrator | + fixed_ip {
2026-03-29 00:02:34.699045 | orchestrator | + ip_address = "192.168.16.15"
2026-03-29 00:02:34.699048 | orchestrator | + subnet_id = (known after apply)
2026-03-29 00:02:34.699052 | orchestrator | }
2026-03-29 00:02:34.699056 | orchestrator | }
2026-03-29 00:02:34.699124 | orchestrator |
2026-03-29 00:02:34.699137 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created
2026-03-29 00:02:34.699156 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" {
2026-03-29 00:02:34.699160 | orchestrator | + force_destroy = false
2026-03-29 00:02:34.699164 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.699168 | orchestrator | + port_id = (known after apply)
2026-03-29 00:02:34.699172 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.699176 | orchestrator | + router_id = (known after apply)
2026-03-29 00:02:34.699180 | orchestrator | + subnet_id = (known after apply)
2026-03-29 00:02:34.699183 | orchestrator | }
2026-03-29 00:02:34.699291 | orchestrator |
2026-03-29 00:02:34.699319 | orchestrator | # openstack_networking_router_v2.router will be created
2026-03-29 00:02:34.699324 | orchestrator | + resource "openstack_networking_router_v2" "router" {
2026-03-29 00:02:34.699328 | orchestrator | + admin_state_up = (known after apply)
2026-03-29 00:02:34.699332 | orchestrator | + all_tags = (known after apply)
2026-03-29 00:02:34.699336 | orchestrator | + availability_zone_hints = [
2026-03-29 00:02:34.699340 | orchestrator | + "nova",
2026-03-29 00:02:34.699344 | orchestrator | ]
2026-03-29 00:02:34.699348 | orchestrator | + distributed = (known after apply)
2026-03-29 00:02:34.699352 | orchestrator | + enable_snat = (known after apply)
2026-03-29 00:02:34.699356 | orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2026-03-29 00:02:34.699360 | orchestrator | + external_qos_policy_id = (known after apply)
2026-03-29 00:02:34.699363 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.699367 | orchestrator | + name = "testbed"
2026-03-29 00:02:34.699371 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.699390 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.699394 | orchestrator |
2026-03-29 00:02:34.699398 | orchestrator | + external_fixed_ip (known after apply)
2026-03-29 00:02:34.699402 | orchestrator | }
2026-03-29 00:02:34.699512 | orchestrator |
2026-03-29 00:02:34.699525 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2026-03-29 00:02:34.699530 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2026-03-29 00:02:34.699548 | orchestrator | + description = "ssh"
2026-03-29 00:02:34.699553 | orchestrator | + direction = "ingress"
2026-03-29 00:02:34.699557 | orchestrator | + ethertype = "IPv4"
2026-03-29 00:02:34.699560 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.699564 | orchestrator | + port_range_max = 22
2026-03-29 00:02:34.699568 | orchestrator | + port_range_min = 22
2026-03-29 00:02:34.699572 | orchestrator | + protocol = "tcp"
2026-03-29 00:02:34.699575 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.699586 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-29 00:02:34.699590 | orchestrator | + remote_group_id = (known after apply)
2026-03-29 00:02:34.699594 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-29 00:02:34.699598 | orchestrator | + security_group_id = (known after apply)
2026-03-29 00:02:34.699602 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.699605 | orchestrator | }
2026-03-29 00:02:34.699717 | orchestrator |
2026-03-29 00:02:34.699730 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2026-03-29 00:02:34.699735 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2026-03-29 00:02:34.699739 | orchestrator | + description = "wireguard"
2026-03-29 00:02:34.699743 | orchestrator | + direction = "ingress"
2026-03-29 00:02:34.699747 | orchestrator | + ethertype = "IPv4"
2026-03-29 00:02:34.699750 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.699754 | orchestrator | + port_range_max = 51820
2026-03-29 00:02:34.699758 | orchestrator | + port_range_min = 51820
2026-03-29 00:02:34.699762 | orchestrator | + protocol = "udp"
2026-03-29 00:02:34.699766 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.699784 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-29 00:02:34.699788 | orchestrator | + remote_group_id = (known after apply)
2026-03-29 00:02:34.699792 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-29 00:02:34.699796 | orchestrator | + security_group_id = (known after apply)
2026-03-29 00:02:34.699800 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.699804 | orchestrator | }
2026-03-29 00:02:34.699881 | orchestrator |
2026-03-29 00:02:34.699893 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2026-03-29 00:02:34.699897 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2026-03-29 00:02:34.699901 | orchestrator | + direction = "ingress"
2026-03-29 00:02:34.699905 | orchestrator | + ethertype = "IPv4"
2026-03-29 00:02:34.699909 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.699912 | orchestrator | + protocol = "tcp"
2026-03-29 00:02:34.699916 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.699920 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-29 00:02:34.699939 | orchestrator | + remote_group_id = (known after apply)
2026-03-29 00:02:34.699943 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-03-29 00:02:34.699947 | orchestrator | + security_group_id = (known after apply)
2026-03-29 00:02:34.699950 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.699954 | orchestrator | }
2026-03-29 00:02:34.700029 | orchestrator |
2026-03-29 00:02:34.700041 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2026-03-29 00:02:34.700046 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2026-03-29 00:02:34.700050 | orchestrator | + direction = "ingress"
2026-03-29 00:02:34.700053 | orchestrator | + ethertype = "IPv4"
2026-03-29 00:02:34.700057 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.700061 | orchestrator | + protocol = "udp"
2026-03-29 00:02:34.700065 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.700068 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-29 00:02:34.700072 | orchestrator | + remote_group_id = (known after apply)
2026-03-29 00:02:34.700090 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-03-29 00:02:34.700095 | orchestrator | + security_group_id = (known after apply)
2026-03-29 00:02:34.700098 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.700102 | orchestrator | }
2026-03-29 00:02:34.700179 | orchestrator |
2026-03-29 00:02:34.700191 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2026-03-29 00:02:34.700200 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-03-29 00:02:34.700204 | orchestrator | + direction = "ingress"
2026-03-29 00:02:34.700208 | orchestrator | + ethertype = "IPv4"
2026-03-29 00:02:34.700212 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.700216 | orchestrator | + protocol = "icmp"
2026-03-29 00:02:34.700219 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.700223 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-29 00:02:34.700227 | orchestrator | + remote_group_id = (known after apply)
2026-03-29 00:02:34.700231 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-29 00:02:34.700249 | orchestrator | + security_group_id = (known after apply)
2026-03-29 00:02:34.700253 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.700257 | orchestrator | }
2026-03-29 00:02:34.700332 | orchestrator |
2026-03-29 00:02:34.700344 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-03-29 00:02:34.700349 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-03-29 00:02:34.700353 | orchestrator | + direction = "ingress"
2026-03-29 00:02:34.700357 | orchestrator | + ethertype = "IPv4"
2026-03-29 00:02:34.700361 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.700364 | orchestrator | + protocol = "tcp"
2026-03-29 00:02:34.700368 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.700372 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-29 00:02:34.700379 | orchestrator | + remote_group_id = (known after apply)
2026-03-29 00:02:34.700383 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-29 00:02:34.700401 | orchestrator | + security_group_id = (known after apply)
2026-03-29 00:02:34.700406 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.700410 | orchestrator | }
2026-03-29 00:02:34.700502 | orchestrator |
2026-03-29 00:02:34.700515 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-03-29 00:02:34.700520 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-03-29 00:02:34.700523 | orchestrator | + direction = "ingress"
2026-03-29 00:02:34.700527 | orchestrator | + ethertype = "IPv4"
2026-03-29 00:02:34.700531 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.700535 | orchestrator | + protocol = "udp"
2026-03-29 00:02:34.700538 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.700542 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-29 00:02:34.700561 | orchestrator | + remote_group_id = (known after apply)
2026-03-29 00:02:34.700566 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-29 00:02:34.700569 | orchestrator | + security_group_id = (known after apply)
2026-03-29 00:02:34.700573 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.700577 | orchestrator | }
2026-03-29 00:02:34.700653 | orchestrator |
2026-03-29 00:02:34.700665 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-03-29 00:02:34.700670 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-03-29 00:02:34.700673 | orchestrator | + direction = "ingress"
2026-03-29 00:02:34.700680 | orchestrator | + ethertype = "IPv4"
2026-03-29 00:02:34.700684 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.700688 | orchestrator | + protocol = "icmp"
2026-03-29 00:02:34.700691 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.700695 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-29 00:02:34.700699 | orchestrator | + remote_group_id = (known after apply)
2026-03-29 00:02:34.700718 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-29 00:02:34.700722 | orchestrator | + security_group_id = (known after apply)
2026-03-29 00:02:34.700726 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.700733 | orchestrator | }
2026-03-29 00:02:34.700821 | orchestrator |
2026-03-29 00:02:34.700833 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-03-29 00:02:34.700838 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-03-29 00:02:34.700842 | orchestrator | + description = "vrrp"
2026-03-29 00:02:34.700846 | orchestrator | + direction = "ingress"
2026-03-29 00:02:34.700850 | orchestrator | + ethertype = "IPv4"
2026-03-29 00:02:34.700853 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.700872 | orchestrator | + protocol = "112"
2026-03-29 00:02:34.700876 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.700879 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-29 00:02:34.700883 | orchestrator | + remote_group_id = (known after apply)
2026-03-29 00:02:34.700887 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-29 00:02:34.700891 | orchestrator | + security_group_id = (known after apply)
2026-03-29 00:02:34.700895 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.700899 | orchestrator | }
2026-03-29 00:02:34.700961 | orchestrator |
2026-03-29 00:02:34.700973 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created
2026-03-29 00:02:34.700978 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-03-29 00:02:34.700982 | orchestrator | + all_tags = (known after apply)
2026-03-29 00:02:34.700986 | orchestrator | + description = "management security group"
2026-03-29 00:02:34.700990 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.700994 | orchestrator | + name = "testbed-management"
2026-03-29 00:02:34.700997 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.701001 | orchestrator | + stateful = (known after apply)
2026-03-29 00:02:34.701005 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.701009 | orchestrator | }
2026-03-29 00:02:34.701071 | orchestrator |
2026-03-29 00:02:34.701083 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created
2026-03-29 00:02:34.701087 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-03-29 00:02:34.701106 | orchestrator | + all_tags = (known after apply)
2026-03-29 00:02:34.701109 | orchestrator | + description = "node security group"
2026-03-29 00:02:34.701113 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.701117 | orchestrator | + name = "testbed-node"
2026-03-29 00:02:34.701121 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.701125 | orchestrator | + stateful = (known after apply)
2026-03-29 00:02:34.701129 | orchestrator | + tenant_id = (known after apply)
2026-03-29 00:02:34.701133 | orchestrator | }
2026-03-29 00:02:34.701268 | orchestrator |
2026-03-29 00:02:34.701280 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created
2026-03-29 00:02:34.701285 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-03-29 00:02:34.701289 | orchestrator | + all_tags = (known after apply)
2026-03-29 00:02:34.701293 | orchestrator | + cidr = "192.168.16.0/20"
2026-03-29 00:02:34.701296 | orchestrator | + dns_nameservers = [
2026-03-29 00:02:34.701301 | orchestrator | + "8.8.8.8",
2026-03-29 00:02:34.701304 | orchestrator | + "9.9.9.9",
2026-03-29 00:02:34.701308 | orchestrator | ]
2026-03-29 00:02:34.701312 | orchestrator | + enable_dhcp = true
2026-03-29 00:02:34.701316 | orchestrator | + gateway_ip = (known after apply)
2026-03-29 00:02:34.701320 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.701338 | orchestrator | + ip_version = 4
2026-03-29 00:02:34.701342 | orchestrator | + ipv6_address_mode = (known after apply)
2026-03-29 00:02:34.701346 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-03-29 00:02:34.701350 | orchestrator | + name = "subnet-testbed-management"
2026-03-29 00:02:34.701353 | orchestrator | + network_id = (known after apply) 2026-03-29 00:02:34.701357 | orchestrator | + no_gateway = false 2026-03-29 00:02:34.701361 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.701365 | orchestrator | + service_types = (known after apply) 2026-03-29 00:02:34.701372 | orchestrator | + tenant_id = (known after apply) 2026-03-29 00:02:34.701376 | orchestrator | 2026-03-29 00:02:34.701380 | orchestrator | + allocation_pool { 2026-03-29 00:02:34.701383 | orchestrator | + end = "192.168.31.250" 2026-03-29 00:02:34.701387 | orchestrator | + start = "192.168.31.200" 2026-03-29 00:02:34.701391 | orchestrator | } 2026-03-29 00:02:34.701395 | orchestrator | } 2026-03-29 00:02:34.701439 | orchestrator | 2026-03-29 00:02:34.701496 | orchestrator | # terraform_data.image will be created 2026-03-29 00:02:34.701501 | orchestrator | + resource "terraform_data" "image" { 2026-03-29 00:02:34.701505 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.701509 | orchestrator | + input = "Ubuntu 24.04" 2026-03-29 00:02:34.701513 | orchestrator | + output = (known after apply) 2026-03-29 00:02:34.701517 | orchestrator | } 2026-03-29 00:02:34.701619 | orchestrator | 2026-03-29 00:02:34.701634 | orchestrator | # terraform_data.image_node will be created 2026-03-29 00:02:34.701639 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-29 00:02:34.701642 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.701646 | orchestrator | + input = "Ubuntu 24.04" 2026-03-29 00:02:34.701650 | orchestrator | + output = (known after apply) 2026-03-29 00:02:34.701654 | orchestrator | } 2026-03-29 00:02:34.701670 | orchestrator | 2026-03-29 00:02:34.701674 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-03-29 00:02:34.701700 | orchestrator |
2026-03-29 00:02:34.701705 | orchestrator | Changes to Outputs:
2026-03-29 00:02:34.701715 | orchestrator | + manager_address = (sensitive value)
2026-03-29 00:02:34.701720 | orchestrator | + private_key = (sensitive value)
2026-03-29 00:02:35.004519 | orchestrator | terraform_data.image: Creating...
2026-03-29 00:02:35.005137 | orchestrator | terraform_data.image: Creation complete after 0s [id=9f14b67d-854c-4b9b-574e-0afe20666205]
2026-03-29 00:02:35.005357 | orchestrator | terraform_data.image_node: Creating...
2026-03-29 00:02:35.005937 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=f584e515-c8a6-6521-c0a4-ebb0338df46d]
2026-03-29 00:02:35.061393 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-29 00:02:35.061442 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-29 00:02:35.066510 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-29 00:02:35.066761 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-29 00:02:35.078818 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-29 00:02:35.079373 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-29 00:02:35.090087 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-29 00:02:35.090275 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-29 00:02:35.091051 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-29 00:02:35.091105 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-29 00:02:35.536898 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-29 00:02:35.541642 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-29 00:02:35.556273 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-29 00:02:35.560065 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-29 00:02:35.595069 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-03-29 00:02:35.600542 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-29 00:02:36.621560 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=1f9ac5ae-0f18-47f3-97eb-15e3d05d46df]
2026-03-29 00:02:36.626639 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-29 00:02:38.790402 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=f8431fa8-afc6-4068-bff4-a67d5c0799f9]
2026-03-29 00:02:38.803247 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-29 00:02:38.841889 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=64dd44e8-56db-4990-9653-26f9a904c769]
2026-03-29 00:02:38.849318 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=6eff12ff-972f-42e1-84ee-23c8e4926f48]
2026-03-29 00:02:38.858472 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=66d732d5-e9a7-47c2-8d7a-ba89d690a00e]
2026-03-29 00:02:38.858553 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-29 00:02:38.865150 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-29 00:02:38.868647 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-29 00:02:38.918972 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=19b179cd-386f-4584-8a4b-106e5ad8592d]
2026-03-29 00:02:38.923644 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-29 00:02:38.937427 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=08797191-4f26-4e13-8d53-ed6640c6fbd2]
2026-03-29 00:02:38.943054 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-29 00:02:38.954743 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=cf707a58-c66d-4c72-840a-e00f4b50b6ac]
2026-03-29 00:02:38.963213 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-29 00:02:38.971638 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=3b8e346a5caf5d3f4fbc128456960828e80800a9]
2026-03-29 00:02:38.975254 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=006c3921-cee3-45d1-95d5-34c501bc63f9]
2026-03-29 00:02:38.983430 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-29 00:02:38.986249 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-29 00:02:38.997002 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=35370e4ee319b629ce30bb98b4a0a52f3d97ba8c]
2026-03-29 00:02:39.329404 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=756b3521-cc64-4337-8d74-551033403337]
2026-03-29 00:02:39.983196 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=b2b0ea37-1718-47df-9477-664336456fc3]
2026-03-29 00:02:39.992171 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-29 00:02:40.339605 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=3898928e-4667-40e3-adb8-3373f74e6bd4]
2026-03-29 00:02:42.353311 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=42d62fbb-fef6-4bbb-9e64-d98a202adbe7]
2026-03-29 00:02:42.357300 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=331a20ca-a2d6-4acb-b247-8df95204773a]
2026-03-29 00:02:42.403356 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=7f3fbef7-4677-4949-824a-e0d60c532987]
2026-03-29 00:02:43.629351 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=7b7cca30-4cc6-46c3-a861-4239a25d0253]
2026-03-29 00:02:43.629435 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=ad0abeaf-0bd4-438b-a52a-fa71680f00ed]
2026-03-29 00:02:43.629474 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=8ca200e1-6960-4195-9342-d9b84c11b36e]
2026-03-29 00:02:45.375769 | orchestrator | openstack_networking_router_v2.router: Creation complete after 5s [id=b206a2be-f92d-4068-b354-c896d32f0298]
2026-03-29 00:02:45.387069 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-29 00:02:45.391497 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-29 00:02:45.392937 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-29 00:02:45.630045 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=8670113d-96cb-4c90-85cb-c4b04e9cbfad]
2026-03-29 00:02:45.639088 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-29 00:02:45.641158 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-29 00:02:45.642045 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-29 00:02:45.647379 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-29 00:02:45.647440 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-29 00:02:45.652402 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-29 00:02:45.654222 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-29 00:02:45.655416 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-29 00:02:45.710366 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=b4dc7d1e-18d5-4e42-a0b1-ef24dc2eb5c1]
2026-03-29 00:02:45.722418 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-29 00:02:45.901990 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=b00b3bc2-a64b-4536-b054-3d988ffdb121]
2026-03-29 00:02:45.912531 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-29 00:02:46.316551 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=aa37f464-4fd3-4296-af71-d63332b1730a]
2026-03-29 00:02:46.324278 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-29 00:02:46.537415 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=ac18c357-4660-4cc6-b6e5-0328da0872eb]
2026-03-29 00:02:46.543390 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-29 00:02:46.554718 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=b9a3bc82-184c-40ed-8c27-71dc2e11e231]
2026-03-29 00:02:46.559025 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-29 00:02:46.718702 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=e181d99b-4545-4cdf-a039-63cee6db67f8]
2026-03-29 00:02:46.725779 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-29 00:02:46.764058 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=790ef607-6443-45c3-9e90-2619bf69c01a]
2026-03-29 00:02:46.770697 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-29 00:02:46.772783 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=2dec769a-d9af-43cb-8a3c-f0e14e732aa6]
2026-03-29 00:02:46.777799 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-29 00:02:47.316378 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=9d5a0770-5235-4419-bc08-c8c8e71ddab9]
2026-03-29 00:02:47.407589 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=319757c4-b541-47bc-bbd1-a1a56d246083]
2026-03-29 00:02:47.464384 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=00fe3420-6557-41ef-8c79-be5f90caafc7]
2026-03-29 00:02:47.481319 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=420d3772-c5ed-41b2-adb7-1e0f00aa9366]
2026-03-29 00:02:47.649187 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=6948e94b-14fc-4eeb-ba0f-e888002e1dc9]
2026-03-29 00:02:47.959210 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=db02a238-8803-4140-b8bc-3027174171e1]
2026-03-29 00:02:47.974985 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=ab39241d-199d-4402-8df3-80486326645e]
2026-03-29 00:02:48.646381 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 3s [id=9a1205b8-a615-4845-8813-3478f6c94635]
2026-03-29 00:02:48.696369 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 2s [id=4122e688-2740-4d30-afc8-305412faa17d]
2026-03-29 00:02:50.395966 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 5s [id=c9e32efc-2214-4b74-b0f6-36a50f457c9a]
2026-03-29 00:02:50.416920 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-29 00:02:50.427078 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-29 00:02:50.427138 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-29 00:02:50.427143 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-29 00:02:50.429099 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-29 00:02:50.445855 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-29 00:02:50.446313 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-29 00:02:52.772167 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=6f0f1c7c-610b-4077-9674-ed8ded763e54]
2026-03-29 00:02:52.781911 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-29 00:02:52.787194 | orchestrator | local_file.inventory: Creating...
2026-03-29 00:02:52.789288 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-29 00:02:52.790590 | orchestrator | local_file.inventory: Creation complete after 0s [id=deaa4646557bfb95d33ea8dd89d1d7b348950a58]
2026-03-29 00:02:52.792709 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=877c1e2bc8caf89f0708a92b9075360b519a7fec]
2026-03-29 00:02:53.692853 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=6f0f1c7c-610b-4077-9674-ed8ded763e54]
2026-03-29 00:03:00.429538 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-29 00:03:00.430692 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-29 00:03:00.430764 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-29 00:03:00.430873 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-29 00:03:00.447590 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-29 00:03:00.447675 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-29 00:03:10.439299 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-29 00:03:10.439436 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-29 00:03:10.439498 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-29 00:03:10.439524 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-29 00:03:10.447899 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-29 00:03:10.448010 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-29 00:03:20.448140 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-29 00:03:20.448230 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-29 00:03:20.448241 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-29 00:03:20.448250 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-29 00:03:20.448258 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-03-29 00:03:20.448266 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-29 00:03:30.457172 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-03-29 00:03:30.457316 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-03-29 00:03:30.457347 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-03-29 00:03:30.457379 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-03-29 00:03:30.457407 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-03-29 00:03:30.457430 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-03-29 00:03:40.464867 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-03-29 00:03:40.464981 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2026-03-29 00:03:40.464997 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed]
2026-03-29 00:03:40.465064 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-03-29 00:03:40.465074 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-03-29 00:03:40.465122 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-03-29 00:03:41.432811 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 51s [id=d43ad8f7-8869-46be-a2e2-2e1b1a69914b]
2026-03-29 00:03:41.480980 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 51s [id=248cddb6-ac6f-45ba-b46f-cfa3115cea26]
2026-03-29 00:03:50.470239 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m0s elapsed]
2026-03-29 00:03:50.470313 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [1m0s elapsed]
2026-03-29 00:03:50.470328 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m0s elapsed]
2026-03-29 00:03:50.470332 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m0s elapsed]
2026-03-29 00:03:51.259842 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 1m1s [id=086211f3-4556-417c-adc4-e47030fd84ee]
2026-03-29 00:03:51.507047 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 1m2s [id=8f39eba4-8937-4cf8-b130-229676552256]
2026-03-29 00:04:00.470638 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m10s elapsed]
2026-03-29 00:04:00.470740 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m10s elapsed]
2026-03-29 00:04:01.470112 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 1m11s [id=cee932f4-099b-4e68-b4a9-0440f774b8fb]
2026-03-29 00:04:10.470871 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m20s elapsed]
2026-03-29 00:04:11.925193 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 1m22s [id=174c673e-6f71-4ab8-8cd5-a1bf7704722a]
2026-03-29 00:04:11.946075 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-29 00:04:11.952139 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-29 00:04:11.952556 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=6547259611721914028]
2026-03-29 00:04:11.959205 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-29 00:04:11.977269 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-29 00:04:11.978035 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-29 00:04:11.982920 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-29 00:04:11.986440 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-29 00:04:11.996678 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-29 00:04:12.001205 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-29 00:04:12.012165 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-29 00:04:12.037110 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-29 00:04:15.347919 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=cee932f4-099b-4e68-b4a9-0440f774b8fb/66d732d5-e9a7-47c2-8d7a-ba89d690a00e]
2026-03-29 00:04:15.412364 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=248cddb6-ac6f-45ba-b46f-cfa3115cea26/6eff12ff-972f-42e1-84ee-23c8e4926f48]
2026-03-29 00:04:15.442536 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=cee932f4-099b-4e68-b4a9-0440f774b8fb/64dd44e8-56db-4990-9653-26f9a904c769]
2026-03-29 00:04:15.446686 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=8f39eba4-8937-4cf8-b130-229676552256/006c3921-cee3-45d1-95d5-34c501bc63f9]
2026-03-29 00:04:15.467886 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=248cddb6-ac6f-45ba-b46f-cfa3115cea26/08797191-4f26-4e13-8d53-ed6640c6fbd2]
2026-03-29 00:04:15.481633 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=8f39eba4-8937-4cf8-b130-229676552256/756b3521-cc64-4337-8d74-551033403337]
2026-03-29 00:04:21.531298 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=cee932f4-099b-4e68-b4a9-0440f774b8fb/19b179cd-386f-4584-8a4b-106e5ad8592d]
2026-03-29 00:04:21.548266 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=248cddb6-ac6f-45ba-b46f-cfa3115cea26/f8431fa8-afc6-4068-bff4-a67d5c0799f9]
2026-03-29 00:04:21.573136 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=8f39eba4-8937-4cf8-b130-229676552256/cf707a58-c66d-4c72-840a-e00f4b50b6ac]
2026-03-29 00:04:22.044396 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-29 00:04:32.044723 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-29 00:04:32.397830 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=facb1c26-eb21-4412-9c5e-022f0f51b4f0]
2026-03-29 00:04:32.420027 | orchestrator |
2026-03-29 00:04:32.420188 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-29 00:04:32.420228 | orchestrator |
2026-03-29 00:04:32.420235 | orchestrator | Outputs:
2026-03-29 00:04:32.420242 | orchestrator |
2026-03-29 00:04:32.420248 | orchestrator | manager_address =
2026-03-29 00:04:32.420255 | orchestrator | private_key =
2026-03-29 00:04:32.706760 | orchestrator | ok: Runtime: 0:02:05.192119
2026-03-29 00:04:32.730420 |
2026-03-29 00:04:32.730670 | TASK [Create infrastructure (stable)]
2026-03-29 00:04:33.265595 | orchestrator | skipping: Conditional result was False
2026-03-29 00:04:33.284671 |
2026-03-29 00:04:33.284861 | TASK [Fetch manager address]
2026-03-29 00:04:33.789331 | orchestrator | ok
2026-03-29 00:04:33.797473 |
2026-03-29 00:04:33.797671 | TASK [Set manager_host address]
2026-03-29 00:04:33.890229 | orchestrator | ok
2026-03-29 00:04:33.897816 |
2026-03-29 00:04:33.897946 | LOOP [Update ansible collections]
2026-03-29 00:04:34.810166 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-29 00:04:34.810449 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-29 00:04:34.810506 | orchestrator | Starting galaxy collection install process
2026-03-29 00:04:34.810533 | orchestrator | Process install dependency map
2026-03-29 00:04:34.810555 | orchestrator | Starting collection install process
2026-03-29 00:04:34.810620 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons'
2026-03-29 00:04:34.810647 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons
2026-03-29 00:04:34.810691 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-29 00:04:34.810747 | orchestrator | ok: Item: commons Runtime: 0:00:00.600348
2026-03-29 00:04:35.873450 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-29 00:04:35.873649 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-29 00:04:35.873704 | orchestrator | Starting galaxy collection install process
2026-03-29 00:04:35.873747 | orchestrator | Process install dependency map
2026-03-29 00:04:35.873785 | orchestrator | Starting collection install process
2026-03-29 00:04:35.873820 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services'
2026-03-29 00:04:35.873856 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services
2026-03-29 00:04:35.873890 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-29 00:04:35.873943 | orchestrator | ok: Item: services Runtime: 0:00:00.784105
2026-03-29 00:04:35.893847 |
2026-03-29 00:04:35.894008 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-29 00:04:46.491265 | orchestrator | ok
2026-03-29 00:04:46.504725 |
2026-03-29 00:04:46.504869 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-29 00:05:46.554084 | orchestrator | ok
2026-03-29 00:05:46.564752 |
2026-03-29 00:05:46.564878 | TASK [Fetch manager ssh hostkey]
2026-03-29 00:05:48.167812 | orchestrator | Output suppressed because no_log was given
2026-03-29 00:05:48.184521 |
2026-03-29 00:05:48.184697 | TASK [Get ssh keypair from terraform environment]
2026-03-29 00:05:48.733178 | orchestrator | ok: Runtime: 0:00:00.011468
2026-03-29 00:05:48.749945 |
2026-03-29 00:05:48.750114 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-29 00:05:48.797715 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-29 00:05:48.808124 |
2026-03-29 00:05:48.808263 | TASK [Run manager part 0]
2026-03-29 00:05:49.752393 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-29 00:05:49.799814 | orchestrator |
2026-03-29 00:05:49.799868 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-29 00:05:49.799878 | orchestrator |
2026-03-29 00:05:49.799894 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-29 00:05:51.601375 | orchestrator | ok: [testbed-manager]
2026-03-29 00:05:51.601431 | orchestrator |
2026-03-29 00:05:51.601452 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-29 00:05:51.601461 | orchestrator |
2026-03-29 00:05:51.601470 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-29 00:05:53.563861 | orchestrator | ok: [testbed-manager]
2026-03-29 00:05:53.564029 | orchestrator |
2026-03-29 00:05:53.564056 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-29 00:05:54.247588 | orchestrator | ok: [testbed-manager]
2026-03-29 00:05:54.247646 | orchestrator |
2026-03-29 00:05:54.247659 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-29 00:05:54.292000 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:05:54.292072 | orchestrator |
2026-03-29 00:05:54.292089 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-29 00:05:54.326128 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:05:54.326239 | orchestrator |
2026-03-29 00:05:54.326247 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-29 00:05:54.363279 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:05:54.363348 |
orchestrator | 2026-03-29 00:05:54.363360 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-03-29 00:05:55.067941 | orchestrator | changed: [testbed-manager] 2026-03-29 00:05:55.067973 | orchestrator | 2026-03-29 00:05:55.067979 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-03-29 00:08:50.788926 | orchestrator | changed: [testbed-manager] 2026-03-29 00:08:50.789021 | orchestrator | 2026-03-29 00:08:50.789041 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-29 00:10:11.972606 | orchestrator | changed: [testbed-manager] 2026-03-29 00:10:11.972675 | orchestrator | 2026-03-29 00:10:11.972695 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-29 00:10:37.290860 | orchestrator | changed: [testbed-manager] 2026-03-29 00:10:37.291123 | orchestrator | 2026-03-29 00:10:37.291162 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-29 00:10:47.692331 | orchestrator | changed: [testbed-manager] 2026-03-29 00:10:47.692427 | orchestrator | 2026-03-29 00:10:47.692444 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-29 00:10:47.741564 | orchestrator | ok: [testbed-manager] 2026-03-29 00:10:47.741650 | orchestrator | 2026-03-29 00:10:47.741674 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-29 00:10:48.628049 | orchestrator | ok: [testbed-manager] 2026-03-29 00:10:48.628284 | orchestrator | 2026-03-29 00:10:48.628326 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-29 00:10:49.399469 | orchestrator | changed: [testbed-manager] 2026-03-29 00:10:49.399556 | orchestrator | 2026-03-29 00:10:49.399570 | orchestrator | TASK [Install netaddr in venv] 
************************************************* 2026-03-29 00:10:55.910405 | orchestrator | changed: [testbed-manager] 2026-03-29 00:10:55.910493 | orchestrator | 2026-03-29 00:10:55.910511 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-29 00:11:01.943869 | orchestrator | changed: [testbed-manager] 2026-03-29 00:11:01.943915 | orchestrator | 2026-03-29 00:11:01.943923 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-29 00:11:04.551826 | orchestrator | changed: [testbed-manager] 2026-03-29 00:11:04.551871 | orchestrator | 2026-03-29 00:11:04.551880 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-29 00:11:06.272571 | orchestrator | changed: [testbed-manager] 2026-03-29 00:11:06.272729 | orchestrator | 2026-03-29 00:11:06.272742 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-29 00:11:07.380232 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-29 00:11:07.380350 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-29 00:11:07.380366 | orchestrator | 2026-03-29 00:11:07.380382 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-29 00:11:07.427309 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-29 00:11:07.427380 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-29 00:11:07.427394 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-29 00:11:07.427407 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-29 00:11:13.768249 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-29 00:11:13.768348 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-29 00:11:13.768363 | orchestrator | 2026-03-29 00:11:13.768376 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-29 00:11:14.327379 | orchestrator | changed: [testbed-manager] 2026-03-29 00:11:14.327467 | orchestrator | 2026-03-29 00:11:14.327484 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-29 00:14:36.147791 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-29 00:14:36.147941 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-29 00:14:36.147966 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-29 00:14:36.147978 | orchestrator | 2026-03-29 00:14:36.147990 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-29 00:14:38.410958 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-29 00:14:38.411050 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-29 00:14:38.411065 | orchestrator | 2026-03-29 00:14:38.411079 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-29 00:14:38.411092 | orchestrator | 2026-03-29 00:14:38.411103 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 00:14:39.732710 | orchestrator | ok: [testbed-manager] 2026-03-29 00:14:39.732798 | orchestrator | 2026-03-29 00:14:39.732816 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-29 00:14:39.781500 | orchestrator | ok: [testbed-manager] 2026-03-29 00:14:39.781544 | 
orchestrator | 2026-03-29 00:14:39.781554 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-29 00:14:39.851089 | orchestrator | ok: [testbed-manager] 2026-03-29 00:14:39.851127 | orchestrator | 2026-03-29 00:14:39.851135 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-29 00:14:40.568425 | orchestrator | changed: [testbed-manager] 2026-03-29 00:14:40.568503 | orchestrator | 2026-03-29 00:14:40.568518 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-29 00:14:41.253998 | orchestrator | changed: [testbed-manager] 2026-03-29 00:14:41.254117 | orchestrator | 2026-03-29 00:14:41.254134 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-29 00:14:42.580973 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-29 00:14:42.839765 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-29 00:14:42.839831 | orchestrator | 2026-03-29 00:14:42.839847 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-29 00:14:44.085801 | orchestrator | changed: [testbed-manager] 2026-03-29 00:14:44.085966 | orchestrator | 2026-03-29 00:14:44.085992 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-29 00:14:45.695256 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-29 00:14:45.695305 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-29 00:14:45.695321 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-29 00:14:45.695328 | orchestrator | 2026-03-29 00:14:45.695336 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-29 00:14:45.740970 | orchestrator | skipping: 
[testbed-manager] 2026-03-29 00:14:45.741053 | orchestrator | 2026-03-29 00:14:45.741068 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-29 00:14:45.819752 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:14:45.819878 | orchestrator | 2026-03-29 00:14:45.819907 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-29 00:14:46.381022 | orchestrator | changed: [testbed-manager] 2026-03-29 00:14:46.381080 | orchestrator | 2026-03-29 00:14:46.381089 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-29 00:14:46.448292 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:14:46.448354 | orchestrator | 2026-03-29 00:14:46.448364 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-29 00:14:47.282752 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-29 00:14:47.282799 | orchestrator | changed: [testbed-manager] 2026-03-29 00:14:47.282810 | orchestrator | 2026-03-29 00:14:47.282818 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-29 00:14:47.316345 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:14:47.316393 | orchestrator | 2026-03-29 00:14:47.316399 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-29 00:14:47.343626 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:14:47.343691 | orchestrator | 2026-03-29 00:14:47.343702 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-29 00:14:47.377959 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:14:47.378064 | orchestrator | 2026-03-29 00:14:47.378082 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-29 00:14:47.443133 | 
orchestrator | skipping: [testbed-manager] 2026-03-29 00:14:47.443188 | orchestrator | 2026-03-29 00:14:47.443194 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-29 00:14:48.184233 | orchestrator | ok: [testbed-manager] 2026-03-29 00:14:48.184312 | orchestrator | 2026-03-29 00:14:48.184329 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-29 00:14:48.184341 | orchestrator | 2026-03-29 00:14:48.184356 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 00:14:49.555418 | orchestrator | ok: [testbed-manager] 2026-03-29 00:14:49.555515 | orchestrator | 2026-03-29 00:14:49.555532 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-29 00:14:50.503550 | orchestrator | changed: [testbed-manager] 2026-03-29 00:14:50.503584 | orchestrator | 2026-03-29 00:14:50.503590 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:14:50.503596 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-03-29 00:14:50.503600 | orchestrator | 2026-03-29 00:14:50.850120 | orchestrator | ok: Runtime: 0:09:01.437818 2026-03-29 00:14:50.866783 | 2026-03-29 00:14:50.866976 | TASK [Point out that the log in on the manager is now possible] 2026-03-29 00:14:50.910531 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-03-29 00:14:50.918807 | 2026-03-29 00:14:50.918972 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-29 00:14:50.953423 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minuts for this task to complete. 
2026-03-29 00:14:50.962553 | 2026-03-29 00:14:50.962673 | TASK [Run manager part 1 + 2] 2026-03-29 00:14:52.619547 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-29 00:14:52.679056 | orchestrator | 2026-03-29 00:14:52.679115 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-29 00:14:52.679127 | orchestrator | 2026-03-29 00:14:52.679148 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 00:14:55.645445 | orchestrator | ok: [testbed-manager] 2026-03-29 00:14:55.645534 | orchestrator | 2026-03-29 00:14:55.645594 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-29 00:14:55.693774 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:14:55.693811 | orchestrator | 2026-03-29 00:14:55.693819 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-29 00:14:55.753212 | orchestrator | ok: [testbed-manager] 2026-03-29 00:14:55.753260 | orchestrator | 2026-03-29 00:14:55.753279 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-29 00:14:55.801819 | orchestrator | ok: [testbed-manager] 2026-03-29 00:14:55.801888 | orchestrator | 2026-03-29 00:14:55.801901 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-29 00:14:55.885692 | orchestrator | ok: [testbed-manager] 2026-03-29 00:14:55.885739 | orchestrator | 2026-03-29 00:14:55.885749 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-29 00:14:55.958419 | orchestrator | ok: [testbed-manager] 2026-03-29 00:14:55.958483 | orchestrator | 2026-03-29 00:14:55.958500 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-29 00:14:56.005969 | 
orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-29 00:14:56.006071 | orchestrator | 2026-03-29 00:14:56.006088 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-29 00:14:56.756731 | orchestrator | ok: [testbed-manager] 2026-03-29 00:14:56.756797 | orchestrator | 2026-03-29 00:14:56.756814 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-29 00:14:56.800946 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:14:56.800983 | orchestrator | 2026-03-29 00:14:56.800989 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-29 00:14:58.264891 | orchestrator | changed: [testbed-manager] 2026-03-29 00:14:58.264934 | orchestrator | 2026-03-29 00:14:58.264944 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-29 00:14:58.824700 | orchestrator | ok: [testbed-manager] 2026-03-29 00:14:58.824769 | orchestrator | 2026-03-29 00:14:58.824785 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-29 00:14:59.988435 | orchestrator | changed: [testbed-manager] 2026-03-29 00:14:59.988484 | orchestrator | 2026-03-29 00:14:59.988497 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-29 00:15:16.369647 | orchestrator | changed: [testbed-manager] 2026-03-29 00:15:16.369720 | orchestrator | 2026-03-29 00:15:16.369736 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-29 00:15:17.031008 | orchestrator | ok: [testbed-manager] 2026-03-29 00:15:17.031065 | orchestrator | 2026-03-29 00:15:17.031081 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-29 00:15:17.092877 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:15:17.092960 | orchestrator | 2026-03-29 00:15:17.092976 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-29 00:15:18.035860 | orchestrator | changed: [testbed-manager] 2026-03-29 00:15:18.035953 | orchestrator | 2026-03-29 00:15:18.035971 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-29 00:15:19.019292 | orchestrator | changed: [testbed-manager] 2026-03-29 00:15:19.019421 | orchestrator | 2026-03-29 00:15:19.019438 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-29 00:15:19.600348 | orchestrator | changed: [testbed-manager] 2026-03-29 00:15:19.600441 | orchestrator | 2026-03-29 00:15:19.600459 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-29 00:15:19.646326 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-29 00:15:19.646389 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-29 00:15:19.646396 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-29 00:15:19.646401 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-29 00:15:22.574889 | orchestrator | changed: [testbed-manager] 2026-03-29 00:15:22.574958 | orchestrator | 2026-03-29 00:15:22.574970 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-29 00:15:30.964653 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-29 00:15:30.964710 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-29 00:15:30.964718 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-29 00:15:30.964724 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-29 00:15:30.964734 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-29 00:15:30.964739 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-29 00:15:30.964743 | orchestrator | 2026-03-29 00:15:30.964748 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-29 00:15:32.028718 | orchestrator | changed: [testbed-manager] 2026-03-29 00:15:32.028880 | orchestrator | 2026-03-29 00:15:32.028899 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-29 00:15:35.125686 | orchestrator | changed: [testbed-manager] 2026-03-29 00:15:35.125793 | orchestrator | 2026-03-29 00:15:35.125811 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-29 00:15:35.173008 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:15:35.173099 | orchestrator | 2026-03-29 00:15:35.173116 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-29 00:17:11.031756 | orchestrator | changed: [testbed-manager] 2026-03-29 00:17:11.031889 | orchestrator | 2026-03-29 00:17:11.031911 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-29 00:17:12.166211 | orchestrator | ok: [testbed-manager] 2026-03-29 00:17:12.166256 | 
orchestrator | 2026-03-29 00:17:12.166267 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:17:12.166319 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-03-29 00:17:12.166327 | orchestrator | 2026-03-29 00:17:12.617756 | orchestrator | ok: Runtime: 0:02:20.964949 2026-03-29 00:17:12.640035 | 2026-03-29 00:17:12.640219 | TASK [Reboot manager] 2026-03-29 00:17:14.180489 | orchestrator | ok: Runtime: 0:00:00.990245 2026-03-29 00:17:14.199033 | 2026-03-29 00:17:14.199209 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-29 00:17:28.166753 | orchestrator | ok 2026-03-29 00:17:28.176534 | 2026-03-29 00:17:28.176670 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-29 00:18:28.213800 | orchestrator | ok 2026-03-29 00:18:28.222343 | 2026-03-29 00:18:28.222483 | TASK [Deploy manager + bootstrap nodes] 2026-03-29 00:18:30.645599 | orchestrator | 2026-03-29 00:18:30.645817 | orchestrator | # DEPLOY MANAGER 2026-03-29 00:18:30.645843 | orchestrator | 2026-03-29 00:18:30.645857 | orchestrator | + set -e 2026-03-29 00:18:30.645871 | orchestrator | + echo 2026-03-29 00:18:30.645884 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-29 00:18:30.645901 | orchestrator | + echo 2026-03-29 00:18:30.645950 | orchestrator | + cat /opt/manager-vars.sh 2026-03-29 00:18:30.648392 | orchestrator | export NUMBER_OF_NODES=6 2026-03-29 00:18:30.648490 | orchestrator | 2026-03-29 00:18:30.648549 | orchestrator | export CEPH_VERSION=reef 2026-03-29 00:18:30.648574 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-29 00:18:30.648595 | orchestrator | export MANAGER_VERSION=latest 2026-03-29 00:18:30.648625 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-29 00:18:30.648636 | orchestrator | 2026-03-29 00:18:30.648655 | orchestrator | export ARA=false 2026-03-29 00:18:30.648667 | 
orchestrator | export DEPLOY_MODE=manager 2026-03-29 00:18:30.648684 | orchestrator | export TEMPEST=true 2026-03-29 00:18:30.648696 | orchestrator | export IS_ZUUL=true 2026-03-29 00:18:30.648707 | orchestrator | 2026-03-29 00:18:30.648725 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2026-03-29 00:18:30.648736 | orchestrator | export EXTERNAL_API=false 2026-03-29 00:18:30.648747 | orchestrator | 2026-03-29 00:18:30.648757 | orchestrator | export IMAGE_USER=ubuntu 2026-03-29 00:18:30.648772 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-29 00:18:30.648783 | orchestrator | 2026-03-29 00:18:30.648793 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-29 00:18:30.648817 | orchestrator | 2026-03-29 00:18:30.648829 | orchestrator | + echo 2026-03-29 00:18:30.648846 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-29 00:18:30.649248 | orchestrator | ++ export INTERACTIVE=false 2026-03-29 00:18:30.649268 | orchestrator | ++ INTERACTIVE=false 2026-03-29 00:18:30.649280 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-29 00:18:30.649292 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-29 00:18:30.649585 | orchestrator | + source /opt/manager-vars.sh 2026-03-29 00:18:30.649607 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-29 00:18:30.649619 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-29 00:18:30.649630 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-29 00:18:30.649640 | orchestrator | ++ CEPH_VERSION=reef 2026-03-29 00:18:30.649651 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-29 00:18:30.649662 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-29 00:18:30.649673 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-29 00:18:30.649708 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-29 00:18:30.649720 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-29 00:18:30.649741 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-29 00:18:30.649752 | orchestrator | ++ export 
ARA=false 2026-03-29 00:18:30.649763 | orchestrator | ++ ARA=false 2026-03-29 00:18:30.649774 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-29 00:18:30.649790 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-29 00:18:30.649801 | orchestrator | ++ export TEMPEST=true 2026-03-29 00:18:30.649812 | orchestrator | ++ TEMPEST=true 2026-03-29 00:18:30.649822 | orchestrator | ++ export IS_ZUUL=true 2026-03-29 00:18:30.649833 | orchestrator | ++ IS_ZUUL=true 2026-03-29 00:18:30.649844 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2026-03-29 00:18:30.649855 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2026-03-29 00:18:30.649866 | orchestrator | ++ export EXTERNAL_API=false 2026-03-29 00:18:30.649876 | orchestrator | ++ EXTERNAL_API=false 2026-03-29 00:18:30.649887 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-29 00:18:30.649898 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-29 00:18:30.649909 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-29 00:18:30.649920 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-29 00:18:30.649930 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-29 00:18:30.649941 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-29 00:18:30.649953 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-29 00:18:30.696021 | orchestrator | + docker version 2026-03-29 00:18:30.796229 | orchestrator | Client: Docker Engine - Community 2026-03-29 00:18:30.796354 | orchestrator | Version: 27.5.1 2026-03-29 00:18:30.796371 | orchestrator | API version: 1.47 2026-03-29 00:18:30.796384 | orchestrator | Go version: go1.22.11 2026-03-29 00:18:30.796395 | orchestrator | Git commit: 9f9e405 2026-03-29 00:18:30.796406 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-29 00:18:30.796419 | orchestrator | OS/Arch: linux/amd64 2026-03-29 00:18:30.796429 | orchestrator | Context: default 2026-03-29 00:18:30.796440 | orchestrator | 2026-03-29 00:18:30.796451 | 
orchestrator | Server: Docker Engine - Community 2026-03-29 00:18:30.796463 | orchestrator | Engine: 2026-03-29 00:18:30.796487 | orchestrator | Version: 27.5.1 2026-03-29 00:18:30.796538 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-29 00:18:30.796583 | orchestrator | Go version: go1.22.11 2026-03-29 00:18:30.796595 | orchestrator | Git commit: 4c9b3b0 2026-03-29 00:18:30.796606 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-29 00:18:30.796617 | orchestrator | OS/Arch: linux/amd64 2026-03-29 00:18:30.796627 | orchestrator | Experimental: false 2026-03-29 00:18:30.796638 | orchestrator | containerd: 2026-03-29 00:18:30.796649 | orchestrator | Version: v2.2.2 2026-03-29 00:18:30.796660 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-03-29 00:18:30.796672 | orchestrator | runc: 2026-03-29 00:18:30.796683 | orchestrator | Version: 1.3.4 2026-03-29 00:18:30.796694 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-29 00:18:30.796704 | orchestrator | docker-init: 2026-03-29 00:18:30.796715 | orchestrator | Version: 0.19.0 2026-03-29 00:18:30.796727 | orchestrator | GitCommit: de40ad0 2026-03-29 00:18:30.799464 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-29 00:18:30.806233 | orchestrator | + set -e 2026-03-29 00:18:30.806288 | orchestrator | + source /opt/manager-vars.sh 2026-03-29 00:18:30.806302 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-29 00:18:30.806316 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-29 00:18:30.806327 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-29 00:18:30.806338 | orchestrator | ++ CEPH_VERSION=reef 2026-03-29 00:18:30.806350 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-29 00:18:30.806362 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-29 00:18:30.806373 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-29 00:18:30.806384 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-29 00:18:30.806395 | 
orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-29 00:18:30.806405 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-29 00:18:30.806416 | orchestrator | ++ export ARA=false 2026-03-29 00:18:30.806426 | orchestrator | ++ ARA=false 2026-03-29 00:18:30.806437 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-29 00:18:30.806448 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-29 00:18:30.806458 | orchestrator | ++ export TEMPEST=true 2026-03-29 00:18:30.806469 | orchestrator | ++ TEMPEST=true 2026-03-29 00:18:30.806479 | orchestrator | ++ export IS_ZUUL=true 2026-03-29 00:18:30.806489 | orchestrator | ++ IS_ZUUL=true 2026-03-29 00:18:30.806523 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2026-03-29 00:18:30.806536 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2026-03-29 00:18:30.806547 | orchestrator | ++ export EXTERNAL_API=false 2026-03-29 00:18:30.806558 | orchestrator | ++ EXTERNAL_API=false 2026-03-29 00:18:30.806577 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-29 00:18:30.806588 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-29 00:18:30.806599 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-29 00:18:30.806609 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-29 00:18:30.806620 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-29 00:18:30.806631 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-29 00:18:30.806641 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-29 00:18:30.806652 | orchestrator | ++ export INTERACTIVE=false 2026-03-29 00:18:30.806662 | orchestrator | ++ INTERACTIVE=false 2026-03-29 00:18:30.806673 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-29 00:18:30.806689 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-29 00:18:30.806700 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-29 00:18:30.806710 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-29 00:18:30.806721 | orchestrator | + 
/opt/configuration/scripts/set-ceph-version.sh reef 2026-03-29 00:18:30.810958 | orchestrator | + set -e 2026-03-29 00:18:30.810983 | orchestrator | + VERSION=reef 2026-03-29 00:18:30.811814 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-29 00:18:30.817714 | orchestrator | + [[ -n ceph_version: reef ]] 2026-03-29 00:18:30.817741 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-03-29 00:18:30.822665 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-03-29 00:18:30.829177 | orchestrator | + set -e 2026-03-29 00:18:30.829234 | orchestrator | + VERSION=2024.2 2026-03-29 00:18:30.829637 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-29 00:18:30.831645 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-03-29 00:18:30.831727 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-03-29 00:18:30.836722 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-29 00:18:30.837389 | orchestrator | ++ semver latest 7.0.0 2026-03-29 00:18:30.898119 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-29 00:18:30.898222 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-29 00:18:30.898238 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-29 00:18:30.899473 | orchestrator | ++ semver latest 10.0.0-0 2026-03-29 00:18:30.966121 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-29 00:18:30.966839 | orchestrator | ++ semver 2024.2 2025.1 2026-03-29 00:18:31.025821 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-29 00:18:31.025918 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-03-29 00:18:31.109454 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-29 00:18:31.110973 | orchestrator | + source /opt/venv/bin/activate 
2026-03-29 00:18:31.112211 | orchestrator | ++ deactivate nondestructive 2026-03-29 00:18:31.112242 | orchestrator | ++ '[' -n '' ']' 2026-03-29 00:18:31.112263 | orchestrator | ++ '[' -n '' ']' 2026-03-29 00:18:31.112275 | orchestrator | ++ hash -r 2026-03-29 00:18:31.112292 | orchestrator | ++ '[' -n '' ']' 2026-03-29 00:18:31.112303 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-29 00:18:31.112314 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-29 00:18:31.112330 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-03-29 00:18:31.112567 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-29 00:18:31.112598 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-29 00:18:31.112614 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-29 00:18:31.112625 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-29 00:18:31.112673 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-29 00:18:31.112962 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-29 00:18:31.112985 | orchestrator | ++ export PATH 2026-03-29 00:18:31.113006 | orchestrator | ++ '[' -n '' ']' 2026-03-29 00:18:31.113022 | orchestrator | ++ '[' -z '' ']' 2026-03-29 00:18:31.113033 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-29 00:18:31.113044 | orchestrator | ++ PS1='(venv) ' 2026-03-29 00:18:31.113055 | orchestrator | ++ export PS1 2026-03-29 00:18:31.113066 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-29 00:18:31.113081 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-29 00:18:31.113092 | orchestrator | ++ hash -r 2026-03-29 00:18:31.113368 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-29 00:18:32.279726 | orchestrator | 2026-03-29 00:18:32.279853 | 
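One detail worth noting in the `ansible-playbook` invocation above: the trailing comma in `-i testbed-manager,` makes Ansible treat the argument as an inline host list rather than a path to an inventory file. A hedged sketch of the same invocation, with paths copied from the trace and a `command -v` guard added so the sketch degrades gracefully where Ansible is not installed:

```shell
#!/usr/bin/env bash
# Trailing comma => inline inventory containing the single host "testbed-manager".
INVENTORY="testbed-manager,"
PLAYBOOK="/opt/configuration/ansible/manager-part-3.yml"
VAULT_PASS="/opt/configuration/environments/.vault_pass"

if command -v ansible-playbook >/dev/null 2>&1; then
    ansible-playbook -i "$INVENTORY" \
        --vault-password-file "$VAULT_PASS" \
        "$PLAYBOOK"
else
    echo "ansible-playbook not available; skipping"
fi
```

Without the comma, Ansible would look for a file or directory named `testbed-manager` and warn that no inventory was parsed.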
orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-29 00:18:32.279878 | orchestrator | 2026-03-29 00:18:32.279895 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-29 00:18:32.858684 | orchestrator | ok: [testbed-manager] 2026-03-29 00:18:32.858803 | orchestrator | 2026-03-29 00:18:32.858820 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-29 00:18:33.802622 | orchestrator | changed: [testbed-manager] 2026-03-29 00:18:33.802713 | orchestrator | 2026-03-29 00:18:33.802726 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-29 00:18:33.802736 | orchestrator | 2026-03-29 00:18:33.802746 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 00:18:37.146152 | orchestrator | ok: [testbed-manager] 2026-03-29 00:18:37.146258 | orchestrator | 2026-03-29 00:18:37.146272 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-29 00:18:37.197698 | orchestrator | ok: [testbed-manager] 2026-03-29 00:18:37.197791 | orchestrator | 2026-03-29 00:18:37.197808 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-29 00:18:37.621757 | orchestrator | changed: [testbed-manager] 2026-03-29 00:18:37.621865 | orchestrator | 2026-03-29 00:18:37.621883 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-29 00:18:37.666378 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:18:37.666484 | orchestrator | 2026-03-29 00:18:37.666543 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-29 00:18:37.994634 | orchestrator | changed: [testbed-manager] 2026-03-29 00:18:37.994734 | orchestrator | 2026-03-29 
00:18:37.994748 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-29 00:18:38.327149 | orchestrator | ok: [testbed-manager] 2026-03-29 00:18:38.327252 | orchestrator | 2026-03-29 00:18:38.327270 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-29 00:18:38.450904 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:18:38.450992 | orchestrator | 2026-03-29 00:18:38.451009 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-03-29 00:18:38.451021 | orchestrator | 2026-03-29 00:18:38.451033 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 00:18:41.217862 | orchestrator | ok: [testbed-manager] 2026-03-29 00:18:41.218070 | orchestrator | 2026-03-29 00:18:41.218091 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-29 00:18:41.319081 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-29 00:18:41.319159 | orchestrator | 2026-03-29 00:18:41.319174 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-29 00:18:41.369989 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-29 00:18:41.370139 | orchestrator | 2026-03-29 00:18:41.370164 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-29 00:18:42.435848 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-29 00:18:42.435943 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-03-29 00:18:42.435954 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-29 00:18:42.435963 | orchestrator | 2026-03-29 00:18:42.435971 | orchestrator | 
TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-29 00:18:44.176531 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-29 00:18:44.176630 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-29 00:18:44.176646 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-29 00:18:44.176659 | orchestrator | 2026-03-29 00:18:44.176671 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-03-29 00:18:44.780660 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-29 00:18:44.780764 | orchestrator | changed: [testbed-manager] 2026-03-29 00:18:44.780782 | orchestrator | 2026-03-29 00:18:44.780795 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-29 00:18:45.435357 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-29 00:18:45.435448 | orchestrator | changed: [testbed-manager] 2026-03-29 00:18:45.435461 | orchestrator | 2026-03-29 00:18:45.435471 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-29 00:18:45.495462 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:18:45.495577 | orchestrator | 2026-03-29 00:18:45.495591 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-29 00:18:45.842776 | orchestrator | ok: [testbed-manager] 2026-03-29 00:18:45.842878 | orchestrator | 2026-03-29 00:18:45.842894 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-29 00:18:45.907601 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-29 00:18:45.907707 | orchestrator | 2026-03-29 00:18:45.907723 | orchestrator | TASK [osism.services.traefik : Create traefik external network] 
**************** 2026-03-29 00:18:47.031644 | orchestrator | changed: [testbed-manager] 2026-03-29 00:18:47.031735 | orchestrator | 2026-03-29 00:18:47.031752 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-29 00:18:47.835288 | orchestrator | changed: [testbed-manager] 2026-03-29 00:18:47.835426 | orchestrator | 2026-03-29 00:18:47.835451 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-29 00:19:01.493921 | orchestrator | changed: [testbed-manager] 2026-03-29 00:19:01.494136 | orchestrator | 2026-03-29 00:19:01.494188 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-29 00:19:01.554604 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:19:01.554692 | orchestrator | 2026-03-29 00:19:01.554706 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-29 00:19:01.554719 | orchestrator | 2026-03-29 00:19:01.554730 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 00:19:03.507869 | orchestrator | ok: [testbed-manager] 2026-03-29 00:19:03.507970 | orchestrator | 2026-03-29 00:19:03.508018 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-29 00:19:03.654769 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-29 00:19:03.654862 | orchestrator | 2026-03-29 00:19:03.654876 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-29 00:19:03.719697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-29 00:19:03.719812 | orchestrator | 2026-03-29 00:19:03.719827 | orchestrator | TASK [osism.services.manager : Install required packages] 
********************** 2026-03-29 00:19:06.467249 | orchestrator | ok: [testbed-manager] 2026-03-29 00:19:06.467351 | orchestrator | 2026-03-29 00:19:06.467367 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-29 00:19:06.525796 | orchestrator | ok: [testbed-manager] 2026-03-29 00:19:06.525900 | orchestrator | 2026-03-29 00:19:06.525916 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-29 00:19:06.658761 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-29 00:19:06.658888 | orchestrator | 2026-03-29 00:19:06.658919 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-29 00:19:09.521680 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-29 00:19:09.521805 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-29 00:19:09.521821 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-29 00:19:09.521833 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-29 00:19:09.521844 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-29 00:19:09.521856 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-29 00:19:09.521882 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-29 00:19:09.522616 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-29 00:19:09.522637 | orchestrator | 2026-03-29 00:19:09.522650 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-29 00:19:10.145746 | orchestrator | changed: [testbed-manager] 2026-03-29 00:19:10.145876 | orchestrator | 2026-03-29 00:19:10.145921 | orchestrator | TASK [osism.services.manager : Copy client environment 
file] ******************* 2026-03-29 00:19:10.756762 | orchestrator | changed: [testbed-manager] 2026-03-29 00:19:10.756855 | orchestrator | 2026-03-29 00:19:10.756869 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-29 00:19:10.830922 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-29 00:19:10.831018 | orchestrator | 2026-03-29 00:19:10.831033 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-29 00:19:11.999002 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-29 00:19:11.999109 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-29 00:19:11.999126 | orchestrator | 2026-03-29 00:19:11.999139 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-29 00:19:12.672228 | orchestrator | changed: [testbed-manager] 2026-03-29 00:19:12.672320 | orchestrator | 2026-03-29 00:19:12.672337 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-29 00:19:12.719958 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:19:12.720047 | orchestrator | 2026-03-29 00:19:12.720062 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-29 00:19:12.803310 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-29 00:19:12.803406 | orchestrator | 2026-03-29 00:19:12.803419 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-29 00:19:13.385382 | orchestrator | changed: [testbed-manager] 2026-03-29 00:19:13.385522 | orchestrator | 2026-03-29 00:19:13.385540 | orchestrator | TASK [osism.services.manager : Include ansible config 
tasks] ******************* 2026-03-29 00:19:13.445180 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-29 00:19:13.445285 | orchestrator | 2026-03-29 00:19:13.445295 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-29 00:19:14.818342 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-29 00:19:14.818499 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-29 00:19:14.818526 | orchestrator | changed: [testbed-manager] 2026-03-29 00:19:14.818546 | orchestrator | 2026-03-29 00:19:14.818567 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-29 00:19:15.462130 | orchestrator | changed: [testbed-manager] 2026-03-29 00:19:15.462250 | orchestrator | 2026-03-29 00:19:15.462275 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-29 00:19:15.512098 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:19:15.512213 | orchestrator | 2026-03-29 00:19:15.512236 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-29 00:19:15.630913 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-29 00:19:15.631013 | orchestrator | 2026-03-29 00:19:15.631028 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-29 00:19:16.152089 | orchestrator | changed: [testbed-manager] 2026-03-29 00:19:16.152196 | orchestrator | 2026-03-29 00:19:16.152234 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-29 00:19:16.563097 | orchestrator | changed: [testbed-manager] 2026-03-29 00:19:16.563211 | orchestrator | 2026-03-29 00:19:16.563228 | 
orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-29 00:19:17.786871 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-29 00:19:17.786979 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-29 00:19:17.786996 | orchestrator | 2026-03-29 00:19:17.787009 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-29 00:19:18.443482 | orchestrator | changed: [testbed-manager] 2026-03-29 00:19:18.443615 | orchestrator | 2026-03-29 00:19:18.443633 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-29 00:19:18.823347 | orchestrator | ok: [testbed-manager] 2026-03-29 00:19:18.823526 | orchestrator | 2026-03-29 00:19:18.823547 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-29 00:19:19.170910 | orchestrator | changed: [testbed-manager] 2026-03-29 00:19:19.171020 | orchestrator | 2026-03-29 00:19:19.171045 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-29 00:19:19.219613 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:19:19.219721 | orchestrator | 2026-03-29 00:19:19.219745 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-29 00:19:19.286590 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-29 00:19:19.286674 | orchestrator | 2026-03-29 00:19:19.286686 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-29 00:19:19.326522 | orchestrator | ok: [testbed-manager] 2026-03-29 00:19:19.326611 | orchestrator | 2026-03-29 00:19:19.326626 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-29 
00:19:21.376582 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-29 00:19:21.376691 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-29 00:19:21.376710 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-29 00:19:21.376724 | orchestrator | 2026-03-29 00:19:21.376738 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-29 00:19:22.046341 | orchestrator | changed: [testbed-manager] 2026-03-29 00:19:22.046471 | orchestrator | 2026-03-29 00:19:22.046488 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-29 00:19:22.743705 | orchestrator | changed: [testbed-manager] 2026-03-29 00:19:22.743800 | orchestrator | 2026-03-29 00:19:22.743816 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-29 00:19:23.449928 | orchestrator | changed: [testbed-manager] 2026-03-29 00:19:23.450071 | orchestrator | 2026-03-29 00:19:23.450089 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-29 00:19:23.519964 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-29 00:19:23.520067 | orchestrator | 2026-03-29 00:19:23.520083 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-29 00:19:23.565796 | orchestrator | ok: [testbed-manager] 2026-03-29 00:19:23.566003 | orchestrator | 2026-03-29 00:19:23.566081 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-29 00:19:24.283496 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-29 00:19:24.283603 | orchestrator | 2026-03-29 00:19:24.283620 | orchestrator | TASK [osism.services.manager : Include service tasks] 
************************** 2026-03-29 00:19:24.368941 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-29 00:19:24.369040 | orchestrator | 2026-03-29 00:19:24.369056 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-29 00:19:25.068871 | orchestrator | changed: [testbed-manager] 2026-03-29 00:19:25.068970 | orchestrator | 2026-03-29 00:19:25.068987 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-29 00:19:25.688737 | orchestrator | ok: [testbed-manager] 2026-03-29 00:19:25.688826 | orchestrator | 2026-03-29 00:19:25.688836 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-29 00:19:25.749585 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:19:25.749663 | orchestrator | 2026-03-29 00:19:25.749672 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-29 00:19:25.804539 | orchestrator | ok: [testbed-manager] 2026-03-29 00:19:25.804610 | orchestrator | 2026-03-29 00:19:25.804619 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-29 00:19:26.647556 | orchestrator | changed: [testbed-manager] 2026-03-29 00:19:26.647674 | orchestrator | 2026-03-29 00:19:26.647699 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-29 00:20:31.526993 | orchestrator | changed: [testbed-manager] 2026-03-29 00:20:31.527124 | orchestrator | 2026-03-29 00:20:31.527142 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-29 00:20:32.433651 | orchestrator | ok: [testbed-manager] 2026-03-29 00:20:32.433751 | orchestrator | 2026-03-29 00:20:32.433766 | orchestrator | TASK [osism.services.manager : Do a 
manual start of the manager service] ******* 2026-03-29 00:20:32.484553 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:20:32.484640 | orchestrator | 2026-03-29 00:20:32.484653 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-29 00:20:35.667259 | orchestrator | changed: [testbed-manager] 2026-03-29 00:20:35.667410 | orchestrator | 2026-03-29 00:20:35.667429 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-03-29 00:20:35.758795 | orchestrator | ok: [testbed-manager] 2026-03-29 00:20:35.758907 | orchestrator | 2026-03-29 00:20:35.758956 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-29 00:20:35.758972 | orchestrator | 2026-03-29 00:20:35.758983 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-29 00:20:35.814609 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:20:35.814696 | orchestrator | 2026-03-29 00:20:35.814712 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-29 00:21:35.860530 | orchestrator | Pausing for 60 seconds 2026-03-29 00:21:35.860671 | orchestrator | changed: [testbed-manager] 2026-03-29 00:21:35.860700 | orchestrator | 2026-03-29 00:21:35.860722 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-29 00:21:38.753141 | orchestrator | changed: [testbed-manager] 2026-03-29 00:21:38.753250 | orchestrator | 2026-03-29 00:21:38.753299 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-29 00:22:20.053190 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-29 00:22:20.053373 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-03-29 00:22:20.053390 | orchestrator | changed: [testbed-manager] 2026-03-29 00:22:20.053431 | orchestrator | 2026-03-29 00:22:20.053445 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-29 00:22:25.078292 | orchestrator | changed: [testbed-manager] 2026-03-29 00:22:25.078400 | orchestrator | 2026-03-29 00:22:25.078416 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-29 00:22:25.156470 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-29 00:22:25.156568 | orchestrator | 2026-03-29 00:22:25.156584 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-29 00:22:25.156597 | orchestrator | 2026-03-29 00:22:25.156608 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-29 00:22:25.197045 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:22:25.197133 | orchestrator | 2026-03-29 00:22:25.197146 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-29 00:22:25.255663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-29 00:22:25.255751 | orchestrator | 2026-03-29 00:22:25.255765 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-29 00:22:25.918703 | orchestrator | changed: [testbed-manager] 2026-03-29 00:22:25.918836 | orchestrator | 2026-03-29 00:22:25.918862 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-29 00:22:28.823799 | orchestrator | ok: [testbed-manager] 2026-03-29 00:22:28.823900 | orchestrator | 2026-03-29 00:22:28.823916 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-03-29 00:22:28.894084 | orchestrator | ok: [testbed-manager] => { 2026-03-29 00:22:28.894271 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-29 00:22:28.894300 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-29 00:22:28.894318 | orchestrator | "Checking running containers against expected versions...", 2026-03-29 00:22:28.894338 | orchestrator | "", 2026-03-29 00:22:28.894357 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-29 00:22:28.894373 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-29 00:22:28.894392 | orchestrator | " Enabled: true", 2026-03-29 00:22:28.894410 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-29 00:22:28.894428 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:22:28.894447 | orchestrator | "", 2026-03-29 00:22:28.894466 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-29 00:22:28.894487 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-03-29 00:22:28.894506 | orchestrator | " Enabled: true", 2026-03-29 00:22:28.894521 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-03-29 00:22:28.894532 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:22:28.894543 | orchestrator | "", 2026-03-29 00:22:28.894553 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-29 00:22:28.894565 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-29 00:22:28.894576 | orchestrator | " Enabled: true", 2026-03-29 00:22:28.894586 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-29 00:22:28.894597 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:22:28.894608 | orchestrator | "", 2026-03-29 00:22:28.894619 | 
orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-29 00:22:28.894630 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-29 00:22:28.894641 | orchestrator | " Enabled: true", 2026-03-29 00:22:28.894652 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-29 00:22:28.894663 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:22:28.894674 | orchestrator | "", 2026-03-29 00:22:28.894685 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-29 00:22:28.894695 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-29 00:22:28.894737 | orchestrator | " Enabled: true", 2026-03-29 00:22:28.894749 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-29 00:22:28.894760 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:22:28.894770 | orchestrator | "", 2026-03-29 00:22:28.894781 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-29 00:22:28.894792 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-29 00:22:28.894803 | orchestrator | " Enabled: true", 2026-03-29 00:22:28.894813 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-29 00:22:28.894824 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:22:28.894834 | orchestrator | "", 2026-03-29 00:22:28.894845 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-29 00:22:28.894856 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-29 00:22:28.894866 | orchestrator | " Enabled: true", 2026-03-29 00:22:28.894877 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-29 00:22:28.894888 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:22:28.894898 | orchestrator | "", 2026-03-29 00:22:28.894909 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-03-29 00:22:28.894920 | orchestrator | " 
Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-29 00:22:28.894930 | orchestrator | " Enabled: true", 2026-03-29 00:22:28.894941 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-29 00:22:28.894952 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:22:28.894963 | orchestrator | "", 2026-03-29 00:22:28.894983 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-29 00:22:28.894994 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-03-29 00:22:28.895009 | orchestrator | " Enabled: true", 2026-03-29 00:22:28.895020 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-03-29 00:22:28.895031 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:22:28.895042 | orchestrator | "", 2026-03-29 00:22:28.895053 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-29 00:22:28.895064 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-29 00:22:28.895075 | orchestrator | " Enabled: true", 2026-03-29 00:22:28.895085 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-29 00:22:28.895096 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:22:28.895107 | orchestrator | "", 2026-03-29 00:22:28.895117 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-29 00:22:28.895128 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-29 00:22:28.895139 | orchestrator | " Enabled: true", 2026-03-29 00:22:28.895149 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-29 00:22:28.895160 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:22:28.895170 | orchestrator | "", 2026-03-29 00:22:28.895181 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-29 00:22:28.895192 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-29 00:22:28.895224 | 
orchestrator | " Enabled: true", 2026-03-29 00:22:28.895235 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-29 00:22:28.895246 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:22:28.895256 | orchestrator | "", 2026-03-29 00:22:28.895267 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-29 00:22:28.895277 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-29 00:22:28.895288 | orchestrator | " Enabled: true", 2026-03-29 00:22:28.895299 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-29 00:22:28.895309 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:22:28.895320 | orchestrator | "", 2026-03-29 00:22:28.895330 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-29 00:22:28.895341 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-29 00:22:28.895352 | orchestrator | " Enabled: true", 2026-03-29 00:22:28.895362 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-29 00:22:28.895373 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:22:28.895390 | orchestrator | "", 2026-03-29 00:22:28.895401 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-29 00:22:28.895440 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-29 00:22:28.895469 | orchestrator | " Enabled: true", 2026-03-29 00:22:28.895488 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-29 00:22:28.895507 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:22:28.895525 | orchestrator | "", 2026-03-29 00:22:28.895546 | orchestrator | "=== Summary ===", 2026-03-29 00:22:28.895565 | orchestrator | "Errors (version mismatches): 0", 2026-03-29 00:22:28.895584 | orchestrator | "Warnings (expected containers not running): 0", 2026-03-29 00:22:28.895605 | orchestrator | "", 2026-03-29 00:22:28.895626 | orchestrator | "✅ All running containers match expected 
versions!" 2026-03-29 00:22:28.895647 | orchestrator | ] 2026-03-29 00:22:28.895660 | orchestrator | } 2026-03-29 00:22:28.895696 | orchestrator | 2026-03-29 00:22:28.895708 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-29 00:22:28.949516 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:22:28.949625 | orchestrator | 2026-03-29 00:22:28.949651 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:22:28.949671 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-29 00:22:28.949690 | orchestrator | 2026-03-29 00:22:29.013882 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-29 00:22:29.013978 | orchestrator | + deactivate 2026-03-29 00:22:29.013993 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-29 00:22:29.014008 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-29 00:22:29.014079 | orchestrator | + export PATH 2026-03-29 00:22:29.014091 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-29 00:22:29.014103 | orchestrator | + '[' -n '' ']' 2026-03-29 00:22:29.014114 | orchestrator | + hash -r 2026-03-29 00:22:29.014125 | orchestrator | + '[' -n '' ']' 2026-03-29 00:22:29.014136 | orchestrator | + unset VIRTUAL_ENV 2026-03-29 00:22:29.014146 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-29 00:22:29.014157 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-29 00:22:29.014168 | orchestrator | + unset -f deactivate 2026-03-29 00:22:29.014180 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-29 00:22:29.021503 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-29 00:22:29.021557 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-29 00:22:29.021569 | orchestrator | + local max_attempts=60 2026-03-29 00:22:29.021580 | orchestrator | + local name=ceph-ansible 2026-03-29 00:22:29.021591 | orchestrator | + local attempt_num=1 2026-03-29 00:22:29.021701 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:22:29.046597 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:22:29.046676 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-29 00:22:29.046688 | orchestrator | + local max_attempts=60 2026-03-29 00:22:29.046698 | orchestrator | + local name=kolla-ansible 2026-03-29 00:22:29.046708 | orchestrator | + local attempt_num=1 2026-03-29 00:22:29.047294 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-29 00:22:29.068284 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:22:29.068352 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-29 00:22:29.068364 | orchestrator | + local max_attempts=60 2026-03-29 00:22:29.068374 | orchestrator | + local name=osism-ansible 2026-03-29 00:22:29.068383 | orchestrator | + local attempt_num=1 2026-03-29 00:22:29.068866 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-29 00:22:29.097544 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:22:29.097629 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-29 00:22:29.097642 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-29 00:22:29.714608 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-29 00:22:29.874160 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-29 00:22:29.874319 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-03-29 00:22:29.874336 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-03-29 00:22:29.874348 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-03-29 00:22:29.874360 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-03-29 00:22:29.874371 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-03-29 00:22:29.874382 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-03-29 00:22:29.874392 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy) 2026-03-29 00:22:29.874419 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-03-29 00:22:29.874431 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-03-29 00:22:29.874442 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute 
ago Up About a minute (healthy) 2026-03-29 00:22:29.874452 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-03-29 00:22:29.874463 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-03-29 00:22:29.874473 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-03-29 00:22:29.874484 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-03-29 00:22:29.874495 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-03-29 00:22:29.879707 | orchestrator | ++ semver latest 7.0.0 2026-03-29 00:22:29.914315 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-29 00:22:29.914395 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-29 00:22:29.914409 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-29 00:22:29.916598 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-29 00:22:42.145401 | orchestrator | 2026-03-29 00:22:42 | INFO  | Prepare task for execution of resolvconf. 2026-03-29 00:22:42.341557 | orchestrator | 2026-03-29 00:22:42 | INFO  | Task 93fb9b49-e05d-456e-a57e-8c65fa17467d (resolvconf) was prepared for execution. 2026-03-29 00:22:42.341650 | orchestrator | 2026-03-29 00:22:42 | INFO  | It takes a moment until task 93fb9b49-e05d-456e-a57e-8c65fa17467d (resolvconf) has been started and output is visible here. 
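The shell trace above polls each manager container (`ceph-ansible`, `kolla-ansible`, `osism-ansible`) with `wait_for_container_healthy`, reading Docker's health state via a Go-template on `docker inspect`. A minimal sketch of that helper, assuming roughly the semantics visible in the trace; the `DOCKER_BIN` override is a hypothetical addition here so the probe can be stubbed out, while the job itself calls `/usr/bin/docker` directly:

```shell
#!/usr/bin/env bash
# Poll a container's health status until it reports "healthy" or the
# attempt budget runs out. Mirrors the wait_for_container_healthy calls
# seen in the job trace; DOCKER_BIN is an illustrative test hook.
DOCKER_BIN="${DOCKER_BIN:-docker}"

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    while true; do
        local status
        # .State.Health.Status is "starting", "healthy", or "unhealthy"
        status="$("$DOCKER_BIN" inspect -f '{{.State.Health.Status}}' "$name")"
        if [[ "$status" == healthy ]]; then
            return 0
        fi
        if (( attempt_num >= max_attempts )); then
            echo "container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 1
    done
}
```

In the trace all three containers answer `healthy` on the first probe, so the loop exits immediately each time.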
2026-03-29 00:22:54.985833 | orchestrator | 2026-03-29 00:22:54.985970 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-29 00:22:54.985987 | orchestrator | 2026-03-29 00:22:54.985999 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 00:22:54.986011 | orchestrator | Sunday 29 March 2026 00:22:45 +0000 (0:00:00.171) 0:00:00.171 ********** 2026-03-29 00:22:54.986082 | orchestrator | ok: [testbed-manager] 2026-03-29 00:22:54.986095 | orchestrator | 2026-03-29 00:22:54.986106 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-29 00:22:54.986118 | orchestrator | Sunday 29 March 2026 00:22:48 +0000 (0:00:03.597) 0:00:03.769 ********** 2026-03-29 00:22:54.986129 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:22:54.986141 | orchestrator | 2026-03-29 00:22:54.986152 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-29 00:22:54.986163 | orchestrator | Sunday 29 March 2026 00:22:49 +0000 (0:00:00.057) 0:00:03.826 ********** 2026-03-29 00:22:54.986207 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-29 00:22:54.986220 | orchestrator | 2026-03-29 00:22:54.986231 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-29 00:22:54.986241 | orchestrator | Sunday 29 March 2026 00:22:49 +0000 (0:00:00.090) 0:00:03.917 ********** 2026-03-29 00:22:54.986263 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-29 00:22:54.986275 | orchestrator | 2026-03-29 00:22:54.986286 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-29 00:22:54.986297 | orchestrator | Sunday 29 March 2026 00:22:49 +0000 (0:00:00.066) 0:00:03.984 ********** 2026-03-29 00:22:54.986307 | orchestrator | ok: [testbed-manager] 2026-03-29 00:22:54.986320 | orchestrator | 2026-03-29 00:22:54.986333 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-29 00:22:54.986346 | orchestrator | Sunday 29 March 2026 00:22:50 +0000 (0:00:01.082) 0:00:05.066 ********** 2026-03-29 00:22:54.986359 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:22:54.986371 | orchestrator | 2026-03-29 00:22:54.986383 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-29 00:22:54.986395 | orchestrator | Sunday 29 March 2026 00:22:50 +0000 (0:00:00.063) 0:00:05.130 ********** 2026-03-29 00:22:54.986408 | orchestrator | ok: [testbed-manager] 2026-03-29 00:22:54.986420 | orchestrator | 2026-03-29 00:22:54.986432 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-29 00:22:54.986445 | orchestrator | Sunday 29 March 2026 00:22:50 +0000 (0:00:00.531) 0:00:05.661 ********** 2026-03-29 00:22:54.986457 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:22:54.986469 | orchestrator | 2026-03-29 00:22:54.986481 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-29 00:22:54.986495 | orchestrator | Sunday 29 March 2026 00:22:50 +0000 (0:00:00.071) 0:00:05.733 ********** 2026-03-29 00:22:54.986507 | orchestrator | changed: [testbed-manager] 2026-03-29 00:22:54.986519 | orchestrator | 2026-03-29 00:22:54.986531 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-29 00:22:54.986543 | orchestrator | Sunday 29 March 2026 00:22:51 +0000 (0:00:00.568) 0:00:06.301 ********** 2026-03-29 00:22:54.986555 | orchestrator | changed: 
[testbed-manager] 2026-03-29 00:22:54.986570 | orchestrator | 2026-03-29 00:22:54.986588 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-29 00:22:54.986607 | orchestrator | Sunday 29 March 2026 00:22:52 +0000 (0:00:01.043) 0:00:07.345 ********** 2026-03-29 00:22:54.986624 | orchestrator | ok: [testbed-manager] 2026-03-29 00:22:54.986641 | orchestrator | 2026-03-29 00:22:54.986688 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-29 00:22:54.986709 | orchestrator | Sunday 29 March 2026 00:22:53 +0000 (0:00:00.987) 0:00:08.333 ********** 2026-03-29 00:22:54.986726 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-29 00:22:54.986744 | orchestrator | 2026-03-29 00:22:54.986763 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-29 00:22:54.986780 | orchestrator | Sunday 29 March 2026 00:22:53 +0000 (0:00:00.079) 0:00:08.413 ********** 2026-03-29 00:22:54.986798 | orchestrator | changed: [testbed-manager] 2026-03-29 00:22:54.986816 | orchestrator | 2026-03-29 00:22:54.986835 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:22:54.986856 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 00:22:54.986875 | orchestrator | 2026-03-29 00:22:54.986893 | orchestrator | 2026-03-29 00:22:54.986907 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:22:54.986918 | orchestrator | Sunday 29 March 2026 00:22:54 +0000 (0:00:01.170) 0:00:09.583 ********** 2026-03-29 00:22:54.986928 | orchestrator | =============================================================================== 2026-03-29 00:22:54.986938 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.60s 2026-03-29 00:22:54.986949 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.17s 2026-03-29 00:22:54.986960 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.08s 2026-03-29 00:22:54.986970 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.04s 2026-03-29 00:22:54.986980 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.99s 2026-03-29 00:22:54.986991 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.57s 2026-03-29 00:22:54.987029 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.53s 2026-03-29 00:22:54.987050 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-03-29 00:22:54.987069 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-03-29 00:22:54.987087 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2026-03-29 00:22:54.987105 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-03-29 00:22:54.987122 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-03-29 00:22:54.987140 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-03-29 00:22:55.169310 | orchestrator | + osism apply sshconfig 2026-03-29 00:23:06.442358 | orchestrator | 2026-03-29 00:23:06 | INFO  | Prepare task for execution of sshconfig. 2026-03-29 00:23:06.514623 | orchestrator | 2026-03-29 00:23:06 | INFO  | Task 810a50c6-bd19-4b76-a352-850adf344188 (sshconfig) was prepared for execution. 
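The `osism apply sshconfig` play that starts here follows a fragment-and-assemble pattern: one config file per host under `.ssh/config.d` ("Ensure config for each host exist"), then a single `~/.ssh/config` built from the fragments ("Assemble ssh config", which the role implements with Ansible's assemble module). A rough shell equivalent of that pattern, under the assumption that fragments simply concatenate in lexical order; the function name, the `User dragon` line (taken from the operator home directory visible earlier in the trace), and the fragment contents are illustrative, not the role's actual template:

```shell
# Write one ssh config fragment per host into <dir>/config.d, then
# concatenate all fragments into <dir>/config, mirroring the role's
# two-step task layout. Fragment contents here are placeholders.
assemble_ssh_config() {
    local dir="$1"; shift
    mkdir -p "$dir/config.d"
    local host
    for host in "$@"; do
        printf 'Host %s\n    User dragon\n' "$host" > "$dir/config.d/$host"
    done
    # Lexical glob order gives a stable assembly, like the assemble module
    cat "$dir"/config.d/* > "$dir/config"
}
```

The per-fragment layout is what makes the "Ensure config for each host exist" loop over `testbed-manager` through `testbed-node-5` cheap to re-run: each item only touches its own file.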
2026-03-29 00:23:06.514722 | orchestrator | 2026-03-29 00:23:06 | INFO  | It takes a moment until task 810a50c6-bd19-4b76-a352-850adf344188 (sshconfig) has been started and output is visible here. 2026-03-29 00:23:17.183702 | orchestrator | 2026-03-29 00:23:17.183815 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-29 00:23:17.183832 | orchestrator | 2026-03-29 00:23:17.183845 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-29 00:23:17.183856 | orchestrator | Sunday 29 March 2026 00:23:09 +0000 (0:00:00.175) 0:00:00.175 ********** 2026-03-29 00:23:17.183867 | orchestrator | ok: [testbed-manager] 2026-03-29 00:23:17.183879 | orchestrator | 2026-03-29 00:23:17.183890 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-29 00:23:17.183901 | orchestrator | Sunday 29 March 2026 00:23:10 +0000 (0:00:00.882) 0:00:01.057 ********** 2026-03-29 00:23:17.183940 | orchestrator | changed: [testbed-manager] 2026-03-29 00:23:17.183952 | orchestrator | 2026-03-29 00:23:17.183963 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-29 00:23:17.183974 | orchestrator | Sunday 29 March 2026 00:23:10 +0000 (0:00:00.482) 0:00:01.540 ********** 2026-03-29 00:23:17.183984 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-03-29 00:23:17.183995 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-29 00:23:17.184006 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-29 00:23:17.184016 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-29 00:23:17.184027 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-29 00:23:17.184037 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-29 00:23:17.184048 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-03-29 00:23:17.184059 | orchestrator | 2026-03-29 00:23:17.184069 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-29 00:23:17.184080 | orchestrator | Sunday 29 March 2026 00:23:16 +0000 (0:00:05.537) 0:00:07.078 ********** 2026-03-29 00:23:17.184091 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:23:17.184101 | orchestrator | 2026-03-29 00:23:17.184112 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-29 00:23:17.184122 | orchestrator | Sunday 29 March 2026 00:23:16 +0000 (0:00:00.105) 0:00:07.184 ********** 2026-03-29 00:23:17.184133 | orchestrator | changed: [testbed-manager] 2026-03-29 00:23:17.184194 | orchestrator | 2026-03-29 00:23:17.184207 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:23:17.184220 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:23:17.184231 | orchestrator | 2026-03-29 00:23:17.184243 | orchestrator | 2026-03-29 00:23:17.184255 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:23:17.184268 | orchestrator | Sunday 29 March 2026 00:23:17 +0000 (0:00:00.547) 0:00:07.731 ********** 2026-03-29 00:23:17.184280 | orchestrator | =============================================================================== 2026-03-29 00:23:17.184293 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.54s 2026-03-29 00:23:17.184305 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.88s 2026-03-29 00:23:17.184317 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.55s 2026-03-29 00:23:17.184330 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.48s 2026-03-29 00:23:17.184342 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.11s 2026-03-29 00:23:17.350704 | orchestrator | + osism apply known-hosts 2026-03-29 00:23:28.604002 | orchestrator | 2026-03-29 00:23:28 | INFO  | Prepare task for execution of known-hosts. 2026-03-29 00:23:28.678637 | orchestrator | 2026-03-29 00:23:28 | INFO  | Task ce117695-d46b-4098-9d0e-a32eee0e1888 (known-hosts) was prepared for execution. 2026-03-29 00:23:28.678728 | orchestrator | 2026-03-29 00:23:28 | INFO  | It takes a moment until task ce117695-d46b-4098-9d0e-a32eee0e1888 (known-hosts) has been started and output is visible here. 2026-03-29 00:23:43.469863 | orchestrator | 2026-03-29 00:23:43.469982 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-29 00:23:43.469999 | orchestrator | 2026-03-29 00:23:43.470011 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-29 00:23:43.470083 | orchestrator | Sunday 29 March 2026 00:23:31 +0000 (0:00:00.172) 0:00:00.172 ********** 2026-03-29 00:23:43.470095 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-29 00:23:43.470107 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-29 00:23:43.470148 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-29 00:23:43.470185 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-29 00:23:43.470196 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-29 00:23:43.470207 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-29 00:23:43.470217 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-29 00:23:43.470228 | orchestrator | 2026-03-29 00:23:43.470239 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-29 
00:23:43.470250 | orchestrator | Sunday 29 March 2026 00:23:37 +0000 (0:00:06.266) 0:00:06.439 ********** 2026-03-29 00:23:43.470273 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-29 00:23:43.470288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-29 00:23:43.470300 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-29 00:23:43.470310 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-29 00:23:43.470321 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-29 00:23:43.470332 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-29 00:23:43.470343 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-29 00:23:43.470353 | orchestrator | 2026-03-29 00:23:43.470366 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 00:23:43.470379 | orchestrator | Sunday 29 March 2026 00:23:37 +0000 (0:00:00.162) 0:00:06.601 ********** 2026-03-29 00:23:43.470392 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM1gWNGcYuGIfI2I7At9md5AYsFjlQa5XtK4hWotERZ5) 2026-03-29 00:23:43.470410 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoZnS/yEfdhMHVOwmQJTfTMuvv525V1BCIWifLUhlSgBqIou4P3lYJZn9DP754f96a14dasNeKOqA7BEonUxuTW+Wurux4PJ5kSVVwSsyfvEKBrd4LxoLGWoV2z8deuXTNDQco0UB54KP9feQrBDzsHB4ceVfu1lRgHBiNhe1TslkikExanctbqHEUV2lq/m0VNGoKaXtztsJwRR/2hOMJuDs4gP150xpJ8+qujSkKDBM1I5FBRhyiVddne8EAIPdEXZLQ9VK7nHK2BaYfDSulZAddvyYWpoBOqWhKlZ0ddnluJMyoMLQb5nu3nPanOHjXJzw5QzcS6VqHEfjl1EZlHaLgQuyrShXRbqI5aA+WhXe49NZwivsyGYJywnCErnbRzMFV5u0NlbI/txt0wph08X8zSp/gSPS1fkFFFwYC52XC0x9JdzOeI0pmZcqm5FBrNW8g9llE3KQnjhhdSPLvIF+k2nXEbIXi+7pSUB/RNPI+U5giGrP2G+U1LIUlDnk=) 2026-03-29 00:23:43.470427 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNAmyQnjZHri1WGXAIE7oQHIwePwVJ69LhCNRrWTB7lSC6+q0LbDLZLCtK0/u2Y6LuTDCAzW6d/Ji4OkIDLIOf0=) 2026-03-29 00:23:43.470440 | orchestrator | 2026-03-29 00:23:43.470453 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 00:23:43.470465 | orchestrator | Sunday 29 March 2026 00:23:39 +0000 (0:00:01.217) 0:00:07.818 ********** 2026-03-29 00:23:43.470503 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIUxFcikZPQek/rmhH/e3EnWEA4BtS0bwl1EMvbAUGj4stsBP3bru2x2kOMNfXtqZ2nofidYRpLqLrEz3LVOBxhZBPuVRedjmqdsPhkW2zrrPHN3p0TaOhZyTGgwqHiK3zFR/rZ9V9r44mRyeBM6nYp+bjtDs/VlChPcXYKu/raKVY6yXTwgdS4d+VOYLmgfkm06WZKIyVR5aU4Lr6UxW1/3ejDA3geMiDKbvbE+mZWIlkqWNeJN2DLTIncJQPryV74hqcbjU2QL+oGAa6WPxwV2ci4RbepaJfKYA0WHuocFg8NTYg3WLerjerZgJ9LL+aAVh5OL98hIGjOQedSWFW2vNSiTilYzMRBk8iupYVmC5boKiexLgbJXNgurR8URmLDWZfPnRkdw8pjAu1NBUREO+UynT5JKyQZH4h6VVEqiKUSAwCqJ5OP0czMIeUSF92ZV4slwyBXkb3mIFubgIb3U2YJ3NrC9tMTqSucl/3KIepp7tYC2t4DCo8HnTJLHk=) 
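Each "Write scanned known_hosts entries" task above records the `ssh-keyscan` results as `host key-type key` lines. A tiny sketch of the idempotent-append behavior such a step needs (append only if the exact entry is absent), assuming that is the intended semantics; the entries used in the example are made-up placeholders, not real host keys:

```shell
# Append a scanned "host type key" entry to a known_hosts file only if
# that exact line is not already present (grep -x: whole line, -F: no
# regex interpretation of the base64 key material).
add_known_host() {
    local file="$1" entry="$2"
    touch "$file"
    grep -qxF "$entry" "$file" || printf '%s\n' "$entry" >> "$file"
}
```

This is why the play can later repeat the scan keyed on `ansible_host` IPs (as it does for `192.168.16.5`) without duplicating the hostname-keyed entries: the IP lines differ in their first field, so both sets coexist.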
2026-03-29 00:23:43.470526 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA4I88lHlD5LLGhPDAabondNWlCE30OgRoHXK0IKIdXKS2pYPp8iiHbIkra2+3OSFR59ihKFjJtHG2pUGDoX2Mw=) 2026-03-29 00:23:43.470540 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJYAQNOPpviLmrXSZBJpRVPqFTPW384SaZyn+jbJX2mD) 2026-03-29 00:23:43.470552 | orchestrator | 2026-03-29 00:23:43.470565 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 00:23:43.470578 | orchestrator | Sunday 29 March 2026 00:23:40 +0000 (0:00:00.970) 0:00:08.789 ********** 2026-03-29 00:23:43.470590 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFkUlHVFNrY2eV+QD4OB1o9PX2/LnbqtyTeQhcTif8/RD2qljURuGS9utSptV6rxWvP6+JvC1Be8GUw8/DiEMc4=) 2026-03-29 00:23:43.470603 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH9LXPeX9ltWUA0Hhize5cMGEio+HqBgz/daMgXvmQOx) 2026-03-29 00:23:43.470744 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGflTPaU1AjR//wrSv/zwsbhBk9+e/MGr7/6Xn7BAdLCo/vb1pxhUn8/cVBzvoOhD1vKboDGVWLPibFMcqlWI5pp0/rXvSgVYyfZkpe9qSMDmcililBVHchiQUf5drI9k4hBH25fHIs/C14+aDKmQJWRg9r4yY4fo5vEEhjdPnfJrBNMo/pemNDgwzcGWgocO+1NneVXhwKD+uGeUubKAssHqveDz2M6xoyU7YpE0GndGyaEHgzt6mSRMUUOMFD+9X+ie3nGpMJ3vX7AELXbhPzUHhFrlaiCFDJAo2iwILG0JtUQGAM8LBT0LsumtdVCgqu6xiOhQmx3w1fASglD1qzC8s5fQDGfk4Dhk7ZR5+hmSOyEGIPywGQ2sAAtnxNmfNz/FD0Yu+1s6BkuU2653Jep0TgzU68ZVi5cFWZxGLiY+58MVfRND8FDS8rdenW3NgRx2FFVptF87KM1x12MtuapNLk6FNIbnk/635epIfI3eQooBuO3SS9JYy98HS2HM=) 2026-03-29 00:23:43.470759 | orchestrator | 2026-03-29 00:23:43.470770 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 00:23:43.470781 
| orchestrator | Sunday 29 March 2026 00:23:41 +0000 (0:00:00.954) 0:00:09.743 ********** 2026-03-29 00:23:43.470793 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDWLjr0bzmENg+5cXCGSPN1MczCKG5sW9mhOo1LkR1lFehGgtHOllMi5MSso3VwZVFcRa0x7mhbGj4fErYH7HOKd+1ACPhT2QWnLdNXY6975tvyYbghmvXPf3TQUdngnuigFYWXc2gzctNhhDBfWeIHFgCI/wOuIMl4NXlRHG3tKrLOy0/5X7eXoCDaEhbmRWOOwYxTcjdsOwShPMdzVBMIVl2rnjaSD3R2FdTDLFvP3PCCr7u7GpAXSApiJatreUbBQwRgJxBXgGUfOuh7bg7e+7TSLx1kTwqhjaGCaw72KAHItAPOvi7lwdQbKvhCdf1c3THrC9Xsja8Kx5gFAmBmy5Ofcj6w3/ldgyVEWxBcFGdHf0kYK+azSpxJNDVwFrz/JRNJ6CkWbiw7ZAwz5sPQt45qXpihxstZyeMLu97v7SSs5465pP8uW7/4CdfMvDTsgCLbiUdpnSNE+4R8VOHB4FZLtzOE2cuwAxqD/OnXiTAuNFv8BvFNFeshf5aamQk=) 2026-03-29 00:23:43.470804 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICyJmq8PidiEtyz52joK6m+BQ1SaNTm4evuZYMZpcUkw) 2026-03-29 00:23:43.470816 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAmyTJRlhV18iI4ozstY+BKJgafi9lBUHewJiErI4HID3M7Lao2chJdQ9CrlxUeqpUj8oQJ2z1XcSj9FQrJKc84=) 2026-03-29 00:23:43.470826 | orchestrator | 2026-03-29 00:23:43.470838 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 00:23:43.470848 | orchestrator | Sunday 29 March 2026 00:23:42 +0000 (0:00:00.993) 0:00:10.737 ********** 2026-03-29 00:23:43.470860 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCpdAM4KhHEcFZ9mVBdgFGZZN/UJVkYjRY52dZArXYRj09bz39cP+I5bnasHHjhlVwMLw9rErluycbyyLz8l4LjV6WkrL7sHsxi2KQe6H8Fapcn55TBR/wmpXyhRBi9s+OAiUeLC591bD9zjeGgnDRdgpO+AClMZPqm6n14k0vr4AzjA/QOcdUKh1juaGEX8i+3oZ5IgpmBtISCTX2lC4gVRITc7OWL/vTTahacttjTb8+e4SyHbX5pdR9n42minD7YCtB5tvFMl4AJez+yiRscRXVxt42KuIMYGZ213VQnbkLUiAVvXK4yWYor4fgV5MvGDwQE11/uDLhE6mxrolFvN2uOvTotspY2AqpCJri+90+4GUdc9iHJ3lbuRb6hgOWS6gp4q+rCfDgO2iMIZxUD+lkdNfDThJ4XDucppEjTIhamEdKHDk9oTgCyK+aJE3QGrRo4Qt6HxpQV7p3gLhJTqLiFumsP3qmMP3z5eRXtwHAdtEl35XtmrfDuP2Buj6M=) 2026-03-29 00:23:43.470881 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNIcxRXZCpnGsLh5woAPMC9X81ZhtI9qCrLBz556o5EKsEDavbzY0V5O09HhwKQqgBxUQeNE5E37DrsD8y36dNw=) 2026-03-29 00:23:43.470892 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwB4dY0ga1amdTGlomNC1zFVmdOPtKGGqVC/MIH/s6X) 2026-03-29 00:23:43.470903 | orchestrator | 2026-03-29 00:23:43.470914 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 00:23:43.470925 | orchestrator | Sunday 29 March 2026 00:23:43 +0000 (0:00:01.020) 0:00:11.758 ********** 2026-03-29 00:23:43.470946 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCv/anOPOMCXeNzF0z8hTsJ9MU0FpQyicmzQt5FXdE7LOi7iioRDhuBqPHR2BCq+59tMOTTpW6ruoBC4uwZn9AkQAetcu9k7rzCD8XYu+pml6G1jCMATz9EvEXZ/0eg45tsmLJLMVGdAZtS7IBAb2X+JlPOAgsmt9SZ0rGTmbVbLhu8fyibfyGhctuuUxmVQoLiUIy2uIG3bBwxuTdPqBYY3mMaQUjg4+OtdrNy5wIadkXl2g30ipxOKnoKXfte/TC2Mf9p5GyTjrylBxeVcPDVJ0jehWzHMB1V5fVlkcaxpxS8FGm2tUnayR9v4UWuKaSf1KqkMe9Gaykqh0TPfSYDEGa4wDGdxl1jIqSmdCzWLXHRtGZgWgJTQNuU7h4YNFqeY91Nluxa9gQQZtd9sGmtyq3aO8nkzvprdXbzrpSVC4xzgCQ5WhRE/1cm/rdqqF7fpa3V75wN4ptYtOpch2iwSN8HadwUAmu8b4DDFfz8ne4AxYVX2Hhgto031c8KTlE=) 2026-03-29 00:23:54.455546 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNIRPU9cHjtBUnf6LM1jKd4ak3CyoCtjjmi/pnyJsck/D7eh8Pj6qHgaTr0N0PhFdTqMvsQgIFVyq9lrNXwCZUc=) 2026-03-29 00:23:54.455652 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGRwsIFYXaiI/e8+fKyJeJ6ksNToCZZ48OpHerV2fpe7) 2026-03-29 00:23:54.455665 | orchestrator | 2026-03-29 00:23:54.455700 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 00:23:54.455710 | orchestrator | Sunday 29 March 2026 00:23:44 +0000 (0:00:00.998) 0:00:12.756 ********** 2026-03-29 00:23:54.455720 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCPuLmKnS70IglzzwILLFNRGQs8eHBUl9FHA4OVwbQ3pg6onaCGfrr1/zttfVeXysD9qJb3kc81VljS7ZCFk+QXbAksiQZgxc8PVZEe3nSbjMPIM/yDZaUBmwzZMHZybcLfK05LDJ6Kb9lt4V9ofiaFSjrbkm5s6HwhD5atCu1JWGtyDFLySxiJSn9+D5aelBMvvKebRP0FL8eWE8PrPbKYLaufj9CDSGXBIoNo0GwaxjIuSUTI36g8RnZ9jgne26fhXD/gMAuLZOVaawTNg6WVTRQSjxxi6HyLsERx/59HLle0agm7xxWsDJVQuF5fACN9DCcEtsGm1P4AhKEQWXmDRtlda1SLJf5LGULE+fkIy+d/YE5ajVtS8LBUihEd/+S0dInl/jht5+exeqnzwFz0xu0LTATdsCSK+UqvgGR8goZYypj7KhA66Y9TO3ngmpsDgEh9JVHVBIND3zoyZ6R1vo1MlG5REooX5hVh12pwOfulXEqHyrahSEekKHI+rms=) 2026-03-29 00:23:54.455729 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFEr9m00rAuIt0DNPVKtDgWB0jr7PD6gwBUObCZySr9vWRvaBzE1jyGmnuFVDs7wuzZ8XvebJ+HibIK4MHBEY4g=) 2026-03-29 00:23:54.455736 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMSJKoNcqRllHyCSnCf1c6lBlGQYxm408NCcy/tBOru+) 2026-03-29 00:23:54.455743 | orchestrator | 2026-03-29 00:23:54.455750 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-29 00:23:54.455757 | orchestrator | Sunday 29 March 2026 00:23:45 +0000 (0:00:00.998) 
0:00:13.755 ********** 2026-03-29 00:23:54.455764 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-29 00:23:54.455772 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-29 00:23:54.455778 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-29 00:23:54.455785 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-29 00:23:54.455792 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-29 00:23:54.455812 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-29 00:23:54.455819 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-29 00:23:54.455855 | orchestrator | 2026-03-29 00:23:54.455867 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-29 00:23:54.455881 | orchestrator | Sunday 29 March 2026 00:23:50 +0000 (0:00:05.152) 0:00:18.908 ********** 2026-03-29 00:23:54.455895 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-29 00:23:54.455909 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-29 00:23:54.455917 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-29 00:23:54.455924 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-29 00:23:54.455930 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-29 00:23:54.455937 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-29 00:23:54.455943 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-29 00:23:54.455949 | orchestrator | 2026-03-29 00:23:54.455956 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 00:23:54.455963 | orchestrator | Sunday 29 March 2026 00:23:50 +0000 (0:00:00.167) 0:00:19.075 ********** 2026-03-29 00:23:54.455969 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM1gWNGcYuGIfI2I7At9md5AYsFjlQa5XtK4hWotERZ5) 2026-03-29 00:23:54.455996 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoZnS/yEfdhMHVOwmQJTfTMuvv525V1BCIWifLUhlSgBqIou4P3lYJZn9DP754f96a14dasNeKOqA7BEonUxuTW+Wurux4PJ5kSVVwSsyfvEKBrd4LxoLGWoV2z8deuXTNDQco0UB54KP9feQrBDzsHB4ceVfu1lRgHBiNhe1TslkikExanctbqHEUV2lq/m0VNGoKaXtztsJwRR/2hOMJuDs4gP150xpJ8+qujSkKDBM1I5FBRhyiVddne8EAIPdEXZLQ9VK7nHK2BaYfDSulZAddvyYWpoBOqWhKlZ0ddnluJMyoMLQb5nu3nPanOHjXJzw5QzcS6VqHEfjl1EZlHaLgQuyrShXRbqI5aA+WhXe49NZwivsyGYJywnCErnbRzMFV5u0NlbI/txt0wph08X8zSp/gSPS1fkFFFwYC52XC0x9JdzOeI0pmZcqm5FBrNW8g9llE3KQnjhhdSPLvIF+k2nXEbIXi+7pSUB/RNPI+U5giGrP2G+U1LIUlDnk=) 2026-03-29 00:23:54.456003 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNAmyQnjZHri1WGXAIE7oQHIwePwVJ69LhCNRrWTB7lSC6+q0LbDLZLCtK0/u2Y6LuTDCAzW6d/Ji4OkIDLIOf0=) 2026-03-29 
00:23:54.456010 | orchestrator | 2026-03-29 00:23:54.456017 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 00:23:54.456024 | orchestrator | Sunday 29 March 2026 00:23:51 +0000 (0:00:01.027) 0:00:20.103 ********** 2026-03-29 00:23:54.456030 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA4I88lHlD5LLGhPDAabondNWlCE30OgRoHXK0IKIdXKS2pYPp8iiHbIkra2+3OSFR59ihKFjJtHG2pUGDoX2Mw=) 2026-03-29 00:23:54.456037 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIUxFcikZPQek/rmhH/e3EnWEA4BtS0bwl1EMvbAUGj4stsBP3bru2x2kOMNfXtqZ2nofidYRpLqLrEz3LVOBxhZBPuVRedjmqdsPhkW2zrrPHN3p0TaOhZyTGgwqHiK3zFR/rZ9V9r44mRyeBM6nYp+bjtDs/VlChPcXYKu/raKVY6yXTwgdS4d+VOYLmgfkm06WZKIyVR5aU4Lr6UxW1/3ejDA3geMiDKbvbE+mZWIlkqWNeJN2DLTIncJQPryV74hqcbjU2QL+oGAa6WPxwV2ci4RbepaJfKYA0WHuocFg8NTYg3WLerjerZgJ9LL+aAVh5OL98hIGjOQedSWFW2vNSiTilYzMRBk8iupYVmC5boKiexLgbJXNgurR8URmLDWZfPnRkdw8pjAu1NBUREO+UynT5JKyQZH4h6VVEqiKUSAwCqJ5OP0czMIeUSF92ZV4slwyBXkb3mIFubgIb3U2YJ3NrC9tMTqSucl/3KIepp7tYC2t4DCo8HnTJLHk=) 2026-03-29 00:23:54.456051 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJYAQNOPpviLmrXSZBJpRVPqFTPW384SaZyn+jbJX2mD) 2026-03-29 00:23:54.456058 | orchestrator | 2026-03-29 00:23:54.456064 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 00:23:54.456071 | orchestrator | Sunday 29 March 2026 00:23:52 +0000 (0:00:01.025) 0:00:21.128 ********** 2026-03-29 00:23:54.456077 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFkUlHVFNrY2eV+QD4OB1o9PX2/LnbqtyTeQhcTif8/RD2qljURuGS9utSptV6rxWvP6+JvC1Be8GUw8/DiEMc4=) 2026-03-29 00:23:54.456085 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDGflTPaU1AjR//wrSv/zwsbhBk9+e/MGr7/6Xn7BAdLCo/vb1pxhUn8/cVBzvoOhD1vKboDGVWLPibFMcqlWI5pp0/rXvSgVYyfZkpe9qSMDmcililBVHchiQUf5drI9k4hBH25fHIs/C14+aDKmQJWRg9r4yY4fo5vEEhjdPnfJrBNMo/pemNDgwzcGWgocO+1NneVXhwKD+uGeUubKAssHqveDz2M6xoyU7YpE0GndGyaEHgzt6mSRMUUOMFD+9X+ie3nGpMJ3vX7AELXbhPzUHhFrlaiCFDJAo2iwILG0JtUQGAM8LBT0LsumtdVCgqu6xiOhQmx3w1fASglD1qzC8s5fQDGfk4Dhk7ZR5+hmSOyEGIPywGQ2sAAtnxNmfNz/FD0Yu+1s6BkuU2653Jep0TgzU68ZVi5cFWZxGLiY+58MVfRND8FDS8rdenW3NgRx2FFVptF87KM1x12MtuapNLk6FNIbnk/635epIfI3eQooBuO3SS9JYy98HS2HM=) 2026-03-29 00:23:54.456092 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH9LXPeX9ltWUA0Hhize5cMGEio+HqBgz/daMgXvmQOx) 2026-03-29 00:23:54.456099 | orchestrator | 2026-03-29 00:23:54.456136 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 00:23:54.456145 | orchestrator | Sunday 29 March 2026 00:23:53 +0000 (0:00:01.063) 0:00:22.192 ********** 2026-03-29 00:23:54.456158 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDWLjr0bzmENg+5cXCGSPN1MczCKG5sW9mhOo1LkR1lFehGgtHOllMi5MSso3VwZVFcRa0x7mhbGj4fErYH7HOKd+1ACPhT2QWnLdNXY6975tvyYbghmvXPf3TQUdngnuigFYWXc2gzctNhhDBfWeIHFgCI/wOuIMl4NXlRHG3tKrLOy0/5X7eXoCDaEhbmRWOOwYxTcjdsOwShPMdzVBMIVl2rnjaSD3R2FdTDLFvP3PCCr7u7GpAXSApiJatreUbBQwRgJxBXgGUfOuh7bg7e+7TSLx1kTwqhjaGCaw72KAHItAPOvi7lwdQbKvhCdf1c3THrC9Xsja8Kx5gFAmBmy5Ofcj6w3/ldgyVEWxBcFGdHf0kYK+azSpxJNDVwFrz/JRNJ6CkWbiw7ZAwz5sPQt45qXpihxstZyeMLu97v7SSs5465pP8uW7/4CdfMvDTsgCLbiUdpnSNE+4R8VOHB4FZLtzOE2cuwAxqD/OnXiTAuNFv8BvFNFeshf5aamQk=) 2026-03-29 00:23:54.456166 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAmyTJRlhV18iI4ozstY+BKJgafi9lBUHewJiErI4HID3M7Lao2chJdQ9CrlxUeqpUj8oQJ2z1XcSj9FQrJKc84=) 2026-03-29 00:23:54.456182 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICyJmq8PidiEtyz52joK6m+BQ1SaNTm4evuZYMZpcUkw) 2026-03-29 00:23:58.047557 | orchestrator | 2026-03-29 00:23:58.047656 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 00:23:58.047671 | orchestrator | Sunday 29 March 2026 00:23:54 +0000 (0:00:00.951) 0:00:23.143 ********** 2026-03-29 00:23:58.047700 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpdAM4KhHEcFZ9mVBdgFGZZN/UJVkYjRY52dZArXYRj09bz39cP+I5bnasHHjhlVwMLw9rErluycbyyLz8l4LjV6WkrL7sHsxi2KQe6H8Fapcn55TBR/wmpXyhRBi9s+OAiUeLC591bD9zjeGgnDRdgpO+AClMZPqm6n14k0vr4AzjA/QOcdUKh1juaGEX8i+3oZ5IgpmBtISCTX2lC4gVRITc7OWL/vTTahacttjTb8+e4SyHbX5pdR9n42minD7YCtB5tvFMl4AJez+yiRscRXVxt42KuIMYGZ213VQnbkLUiAVvXK4yWYor4fgV5MvGDwQE11/uDLhE6mxrolFvN2uOvTotspY2AqpCJri+90+4GUdc9iHJ3lbuRb6hgOWS6gp4q+rCfDgO2iMIZxUD+lkdNfDThJ4XDucppEjTIhamEdKHDk9oTgCyK+aJE3QGrRo4Qt6HxpQV7p3gLhJTqLiFumsP3qmMP3z5eRXtwHAdtEl35XtmrfDuP2Buj6M=) 2026-03-29 00:23:58.047715 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwB4dY0ga1amdTGlomNC1zFVmdOPtKGGqVC/MIH/s6X) 2026-03-29 00:23:58.047750 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNIcxRXZCpnGsLh5woAPMC9X81ZhtI9qCrLBz556o5EKsEDavbzY0V5O09HhwKQqgBxUQeNE5E37DrsD8y36dNw=) 2026-03-29 00:23:58.047761 | orchestrator | 2026-03-29 00:23:58.047771 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 00:23:58.047780 | orchestrator | Sunday 29 March 2026 00:23:55 +0000 (0:00:00.909) 0:00:24.053 ********** 2026-03-29 00:23:58.047790 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCv/anOPOMCXeNzF0z8hTsJ9MU0FpQyicmzQt5FXdE7LOi7iioRDhuBqPHR2BCq+59tMOTTpW6ruoBC4uwZn9AkQAetcu9k7rzCD8XYu+pml6G1jCMATz9EvEXZ/0eg45tsmLJLMVGdAZtS7IBAb2X+JlPOAgsmt9SZ0rGTmbVbLhu8fyibfyGhctuuUxmVQoLiUIy2uIG3bBwxuTdPqBYY3mMaQUjg4+OtdrNy5wIadkXl2g30ipxOKnoKXfte/TC2Mf9p5GyTjrylBxeVcPDVJ0jehWzHMB1V5fVlkcaxpxS8FGm2tUnayR9v4UWuKaSf1KqkMe9Gaykqh0TPfSYDEGa4wDGdxl1jIqSmdCzWLXHRtGZgWgJTQNuU7h4YNFqeY91Nluxa9gQQZtd9sGmtyq3aO8nkzvprdXbzrpSVC4xzgCQ5WhRE/1cm/rdqqF7fpa3V75wN4ptYtOpch2iwSN8HadwUAmu8b4DDFfz8ne4AxYVX2Hhgto031c8KTlE=) 2026-03-29 00:23:58.047800 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGRwsIFYXaiI/e8+fKyJeJ6ksNToCZZ48OpHerV2fpe7) 2026-03-29 00:23:58.047810 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNIRPU9cHjtBUnf6LM1jKd4ak3CyoCtjjmi/pnyJsck/D7eh8Pj6qHgaTr0N0PhFdTqMvsQgIFVyq9lrNXwCZUc=) 2026-03-29 00:23:58.047820 | orchestrator | 2026-03-29 00:23:58.047829 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-29 00:23:58.047839 | orchestrator | Sunday 29 March 2026 00:23:56 +0000 (0:00:00.901) 0:00:24.954 ********** 2026-03-29 00:23:58.047849 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMSJKoNcqRllHyCSnCf1c6lBlGQYxm408NCcy/tBOru+) 2026-03-29 00:23:58.047858 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCPuLmKnS70IglzzwILLFNRGQs8eHBUl9FHA4OVwbQ3pg6onaCGfrr1/zttfVeXysD9qJb3kc81VljS7ZCFk+QXbAksiQZgxc8PVZEe3nSbjMPIM/yDZaUBmwzZMHZybcLfK05LDJ6Kb9lt4V9ofiaFSjrbkm5s6HwhD5atCu1JWGtyDFLySxiJSn9+D5aelBMvvKebRP0FL8eWE8PrPbKYLaufj9CDSGXBIoNo0GwaxjIuSUTI36g8RnZ9jgne26fhXD/gMAuLZOVaawTNg6WVTRQSjxxi6HyLsERx/59HLle0agm7xxWsDJVQuF5fACN9DCcEtsGm1P4AhKEQWXmDRtlda1SLJf5LGULE+fkIy+d/YE5ajVtS8LBUihEd/+S0dInl/jht5+exeqnzwFz0xu0LTATdsCSK+UqvgGR8goZYypj7KhA66Y9TO3ngmpsDgEh9JVHVBIND3zoyZ6R1vo1MlG5REooX5hVh12pwOfulXEqHyrahSEekKHI+rms=) 2026-03-29 00:23:58.047869 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFEr9m00rAuIt0DNPVKtDgWB0jr7PD6gwBUObCZySr9vWRvaBzE1jyGmnuFVDs7wuzZ8XvebJ+HibIK4MHBEY4g=) 2026-03-29 00:23:58.047878 | orchestrator | 2026-03-29 00:23:58.047888 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-29 00:23:58.047897 | orchestrator | Sunday 29 March 2026 00:23:57 +0000 (0:00:00.932) 0:00:25.887 ********** 2026-03-29 00:23:58.047908 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-29 00:23:58.047917 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-29 00:23:58.047927 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-29 00:23:58.047936 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-29 00:23:58.047945 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-29 00:23:58.047955 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-29 00:23:58.047964 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-29 00:23:58.047974 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:23:58.047984 | orchestrator | 2026-03-29 00:23:58.048010 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts 
entries] ************* 2026-03-29 00:23:58.048021 | orchestrator | Sunday 29 March 2026 00:23:57 +0000 (0:00:00.154) 0:00:26.042 ********** 2026-03-29 00:23:58.048037 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:23:58.048047 | orchestrator | 2026-03-29 00:23:58.048056 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-29 00:23:58.048066 | orchestrator | Sunday 29 March 2026 00:23:57 +0000 (0:00:00.034) 0:00:26.076 ********** 2026-03-29 00:23:58.048076 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:23:58.048085 | orchestrator | 2026-03-29 00:23:58.048095 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-29 00:23:58.048143 | orchestrator | Sunday 29 March 2026 00:23:57 +0000 (0:00:00.035) 0:00:26.112 ********** 2026-03-29 00:23:58.048154 | orchestrator | changed: [testbed-manager] 2026-03-29 00:23:58.048163 | orchestrator | 2026-03-29 00:23:58.048173 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:23:58.048183 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 00:23:58.048194 | orchestrator | 2026-03-29 00:23:58.048204 | orchestrator | 2026-03-29 00:23:58.048213 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:23:58.048223 | orchestrator | Sunday 29 March 2026 00:23:57 +0000 (0:00:00.455) 0:00:26.568 ********** 2026-03-29 00:23:58.048232 | orchestrator | =============================================================================== 2026-03-29 00:23:58.048242 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.27s 2026-03-29 00:23:58.048251 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.15s 2026-03-29 00:23:58.048261 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2026-03-29 00:23:58.048271 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-29 00:23:58.048280 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-29 00:23:58.048290 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-29 00:23:58.048299 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-29 00:23:58.048309 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-29 00:23:58.048318 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-29 00:23:58.048328 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-03-29 00:23:58.048337 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2026-03-29 00:23:58.048353 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2026-03-29 00:23:58.048363 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2026-03-29 00:23:58.048373 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2026-03-29 00:23:58.048382 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.91s 2026-03-29 00:23:58.048392 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.90s 2026-03-29 00:23:58.048401 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.46s 2026-03-29 00:23:58.048411 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-03-29 00:23:58.048420 | 
orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-03-29 00:23:58.048430 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2026-03-29 00:23:58.172410 | orchestrator | + osism apply squid 2026-03-29 00:24:09.383448 | orchestrator | 2026-03-29 00:24:09 | INFO  | Prepare task for execution of squid. 2026-03-29 00:24:09.458821 | orchestrator | 2026-03-29 00:24:09 | INFO  | Task d6997d91-7c79-4cc7-9a1c-b75552d865e6 (squid) was prepared for execution. 2026-03-29 00:24:09.458916 | orchestrator | 2026-03-29 00:24:09 | INFO  | It takes a moment until task d6997d91-7c79-4cc7-9a1c-b75552d865e6 (squid) has been started and output is visible here. 2026-03-29 00:26:01.682971 | orchestrator | 2026-03-29 00:26:01.683175 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-29 00:26:01.683193 | orchestrator | 2026-03-29 00:26:01.683207 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-29 00:26:01.683219 | orchestrator | Sunday 29 March 2026 00:24:12 +0000 (0:00:00.187) 0:00:00.187 ********** 2026-03-29 00:26:01.683231 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-29 00:26:01.683243 | orchestrator | 2026-03-29 00:26:01.683254 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-29 00:26:01.683265 | orchestrator | Sunday 29 March 2026 00:24:12 +0000 (0:00:00.073) 0:00:00.260 ********** 2026-03-29 00:26:01.683276 | orchestrator | ok: [testbed-manager] 2026-03-29 00:26:01.683287 | orchestrator | 2026-03-29 00:26:01.683298 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-29 00:26:01.683310 | orchestrator | Sunday 29 March 2026 00:24:14 +0000 
(0:00:02.199) 0:00:02.460 ********** 2026-03-29 00:26:01.683321 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-29 00:26:01.683332 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-29 00:26:01.683343 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-29 00:26:01.683354 | orchestrator | 2026-03-29 00:26:01.683365 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-29 00:26:01.683376 | orchestrator | Sunday 29 March 2026 00:24:16 +0000 (0:00:01.169) 0:00:03.630 ********** 2026-03-29 00:26:01.683387 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-29 00:26:01.683397 | orchestrator | 2026-03-29 00:26:01.683408 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-29 00:26:01.683419 | orchestrator | Sunday 29 March 2026 00:24:17 +0000 (0:00:01.024) 0:00:04.655 ********** 2026-03-29 00:26:01.683430 | orchestrator | ok: [testbed-manager] 2026-03-29 00:26:01.683441 | orchestrator | 2026-03-29 00:26:01.683451 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-29 00:26:01.683480 | orchestrator | Sunday 29 March 2026 00:24:17 +0000 (0:00:00.349) 0:00:05.005 ********** 2026-03-29 00:26:01.683494 | orchestrator | changed: [testbed-manager] 2026-03-29 00:26:01.683507 | orchestrator | 2026-03-29 00:26:01.683520 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-29 00:26:01.683532 | orchestrator | Sunday 29 March 2026 00:24:18 +0000 (0:00:00.882) 0:00:05.888 ********** 2026-03-29 00:26:01.683547 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-29 00:26:01.683567 | orchestrator | ok: [testbed-manager] 2026-03-29 00:26:01.683598 | orchestrator | 2026-03-29 00:26:01.683618 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-29 00:26:01.683636 | orchestrator | Sunday 29 March 2026 00:24:48 +0000 (0:00:30.628) 0:00:36.516 ********** 2026-03-29 00:26:01.683654 | orchestrator | changed: [testbed-manager] 2026-03-29 00:26:01.683674 | orchestrator | 2026-03-29 00:26:01.683694 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-29 00:26:01.683712 | orchestrator | Sunday 29 March 2026 00:25:00 +0000 (0:00:11.841) 0:00:48.358 ********** 2026-03-29 00:26:01.683731 | orchestrator | Pausing for 60 seconds 2026-03-29 00:26:01.683746 | orchestrator | changed: [testbed-manager] 2026-03-29 00:26:01.683758 | orchestrator | 2026-03-29 00:26:01.683768 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-29 00:26:01.683779 | orchestrator | Sunday 29 March 2026 00:26:00 +0000 (0:01:00.090) 0:01:48.448 ********** 2026-03-29 00:26:01.683790 | orchestrator | ok: [testbed-manager] 2026-03-29 00:26:01.683801 | orchestrator | 2026-03-29 00:26:01.683812 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-29 00:26:01.683849 | orchestrator | Sunday 29 March 2026 00:26:00 +0000 (0:00:00.074) 0:01:48.522 ********** 2026-03-29 00:26:01.683861 | orchestrator | changed: [testbed-manager] 2026-03-29 00:26:01.683871 | orchestrator | 2026-03-29 00:26:01.683882 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:26:01.683893 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:26:01.683904 | orchestrator | 2026-03-29 00:26:01.683915 | orchestrator | 2026-03-29 00:26:01.683926 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-29 00:26:01.683937 | orchestrator | Sunday 29 March 2026 00:26:01 +0000 (0:00:00.598) 0:01:49.121 ********** 2026-03-29 00:26:01.683947 | orchestrator | =============================================================================== 2026-03-29 00:26:01.683958 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-03-29 00:26:01.683969 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.63s 2026-03-29 00:26:01.684002 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.84s 2026-03-29 00:26:01.684014 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.20s 2026-03-29 00:26:01.684024 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.17s 2026-03-29 00:26:01.684035 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.02s 2026-03-29 00:26:01.684046 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.88s 2026-03-29 00:26:01.684056 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s 2026-03-29 00:26:01.684067 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2026-03-29 00:26:01.684078 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-03-29 00:26:01.684088 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2026-03-29 00:26:01.859499 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-29 00:26:01.859594 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-03-29 00:26:01.864750 | orchestrator | + set -e 2026-03-29 00:26:01.864813 | orchestrator | + NAMESPACE=kolla 2026-03-29 
00:26:01.864828 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-29 00:26:01.868186 | orchestrator | ++ semver latest 9.0.0 2026-03-29 00:26:01.912492 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-29 00:26:01.912580 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-29 00:26:01.913095 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-29 00:26:13.337800 | orchestrator | 2026-03-29 00:26:13 | INFO  | Prepare task for execution of operator. 2026-03-29 00:26:13.409782 | orchestrator | 2026-03-29 00:26:13 | INFO  | Task bf221a8d-fd1a-4d8a-9156-db8607c1a9e1 (operator) was prepared for execution. 2026-03-29 00:26:13.409903 | orchestrator | 2026-03-29 00:26:13 | INFO  | It takes a moment until task bf221a8d-fd1a-4d8a-9156-db8607c1a9e1 (operator) has been started and output is visible here. 2026-03-29 00:26:29.108559 | orchestrator | 2026-03-29 00:26:29.108683 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-29 00:26:29.108701 | orchestrator | 2026-03-29 00:26:29.108712 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 00:26:29.108723 | orchestrator | Sunday 29 March 2026 00:26:16 +0000 (0:00:00.194) 0:00:00.194 ********** 2026-03-29 00:26:29.108732 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:26:29.108744 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:26:29.108754 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:26:29.108763 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:26:29.108773 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:26:29.108783 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:26:29.108796 | orchestrator | 2026-03-29 00:26:29.108806 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-29 00:26:29.108843 | orchestrator | Sunday 29 March 2026 00:26:20 
+0000 (0:00:04.312) 0:00:04.507 ********** 2026-03-29 00:26:29.108861 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:26:29.108877 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:26:29.108894 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:26:29.108910 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:26:29.108923 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:26:29.108933 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:26:29.108942 | orchestrator | 2026-03-29 00:26:29.109018 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-29 00:26:29.109030 | orchestrator | 2026-03-29 00:26:29.109039 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-29 00:26:29.109055 | orchestrator | Sunday 29 March 2026 00:26:21 +0000 (0:00:00.790) 0:00:05.297 ********** 2026-03-29 00:26:29.109072 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:26:29.109089 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:26:29.109105 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:26:29.109124 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:26:29.109140 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:26:29.109156 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:26:29.109167 | orchestrator | 2026-03-29 00:26:29.109179 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-29 00:26:29.109209 | orchestrator | Sunday 29 March 2026 00:26:21 +0000 (0:00:00.142) 0:00:05.439 ********** 2026-03-29 00:26:29.109221 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:26:29.109231 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:26:29.109242 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:26:29.109253 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:26:29.109264 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:26:29.109275 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:26:29.109286 | orchestrator | 
2026-03-29 00:26:29.109297 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-29 00:26:29.109308 | orchestrator | Sunday 29 March 2026 00:26:21 +0000 (0:00:00.135) 0:00:05.575 ********** 2026-03-29 00:26:29.109323 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:26:29.109340 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:26:29.109359 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:26:29.109376 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:26:29.109394 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:26:29.109412 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:26:29.109429 | orchestrator | 2026-03-29 00:26:29.109446 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-29 00:26:29.109463 | orchestrator | Sunday 29 March 2026 00:26:22 +0000 (0:00:00.697) 0:00:06.272 ********** 2026-03-29 00:26:29.109479 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:26:29.109496 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:26:29.109512 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:26:29.109528 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:26:29.109545 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:26:29.109562 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:26:29.109578 | orchestrator | 2026-03-29 00:26:29.109594 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-29 00:26:29.109611 | orchestrator | Sunday 29 March 2026 00:26:23 +0000 (0:00:00.940) 0:00:07.212 ********** 2026-03-29 00:26:29.109628 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-29 00:26:29.109645 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-29 00:26:29.109662 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-03-29 00:26:29.109671 | orchestrator | changed: [testbed-node-2] => (item=adm) 
2026-03-29 00:26:29.109681 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-03-29 00:26:29.109693 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-29 00:26:29.109709 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-29 00:26:29.109726 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-29 00:26:29.109742 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-29 00:26:29.109768 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-29 00:26:29.109778 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-29 00:26:29.109787 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-29 00:26:29.109797 | orchestrator | 2026-03-29 00:26:29.109809 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-29 00:26:29.109825 | orchestrator | Sunday 29 March 2026 00:26:24 +0000 (0:00:01.161) 0:00:08.374 ********** 2026-03-29 00:26:29.109842 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:26:29.109857 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:26:29.109873 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:26:29.109889 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:26:29.109905 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:26:29.109923 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:26:29.109939 | orchestrator | 2026-03-29 00:26:29.109983 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-29 00:26:29.109998 | orchestrator | Sunday 29 March 2026 00:26:25 +0000 (0:00:01.264) 0:00:09.639 ********** 2026-03-29 00:26:29.110008 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-29 00:26:29.110080 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-03-29 00:26:29.110098 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 
2026-03-29 00:26:29.110116 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-03-29 00:26:29.110133 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-29 00:26:29.110175 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-29 00:26:29.110195 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-29 00:26:29.110212 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-29 00:26:29.110230 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-29 00:26:29.110242 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-29 00:26:29.110251 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-29 00:26:29.110266 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-29 00:26:29.110281 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-29 00:26:29.110297 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-29 00:26:29.110314 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-29 00:26:29.110329 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-29 00:26:29.110353 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-29 00:26:29.110363 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-29 00:26:29.110373 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-29 00:26:29.110382 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-29 00:26:29.110392 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-29 00:26:29.110401 | orchestrator |
2026-03-29 00:26:29.110411 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-29 00:26:29.110421 | orchestrator | Sunday 29 March 2026 00:26:27 +0000 (0:00:01.285) 0:00:10.925 **********
2026-03-29 00:26:29.110431 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:26:29.110440 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:26:29.110449 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:26:29.110459 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:26:29.110468 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:26:29.110477 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:26:29.110487 | orchestrator |
2026-03-29 00:26:29.110496 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-29 00:26:29.110516 | orchestrator | Sunday 29 March 2026 00:26:27 +0000 (0:00:00.128) 0:00:11.053 **********
2026-03-29 00:26:29.110525 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:26:29.110534 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:26:29.110544 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:26:29.110553 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:26:29.110562 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:26:29.110572 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:26:29.110581 | orchestrator |
2026-03-29 00:26:29.110590 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-29 00:26:29.110600 | orchestrator | Sunday 29 March 2026 00:26:27 +0000 (0:00:00.144) 0:00:11.198 **********
2026-03-29 00:26:29.110609 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:26:29.110618 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:26:29.110627 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:26:29.110637 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:26:29.110646 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:26:29.110655 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:26:29.110664 | orchestrator |
2026-03-29 00:26:29.110674 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-29 00:26:29.110683 | orchestrator | Sunday 29 March 2026 00:26:27 +0000 (0:00:00.540) 0:00:11.738 **********
2026-03-29 00:26:29.110692 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:26:29.110701 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:26:29.110710 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:26:29.110720 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:26:29.110729 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:26:29.110738 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:26:29.110747 | orchestrator |
2026-03-29 00:26:29.110757 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-29 00:26:29.110766 | orchestrator | Sunday 29 March 2026 00:26:28 +0000 (0:00:00.175) 0:00:11.914 **********
2026-03-29 00:26:29.110776 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-29 00:26:29.110785 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-29 00:26:29.110794 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:26:29.110803 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:26:29.110813 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-29 00:26:29.110822 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:26:29.110831 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-29 00:26:29.110840 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-29 00:26:29.110850 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:26:29.110859 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:26:29.110868 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-29 00:26:29.110877 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:26:29.110886 | orchestrator |
2026-03-29 00:26:29.110896 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-29 00:26:29.110905 | orchestrator | Sunday 29 March 2026 00:26:28 +0000 (0:00:00.723) 0:00:12.637 **********
2026-03-29 00:26:29.110915 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:26:29.110924 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:26:29.110933 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:26:29.110942 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:26:29.110952 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:26:29.110987 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:26:29.110996 | orchestrator |
2026-03-29 00:26:29.111005 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-29 00:26:29.111015 | orchestrator | Sunday 29 March 2026 00:26:29 +0000 (0:00:00.133) 0:00:12.770 **********
2026-03-29 00:26:29.111024 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:26:29.111034 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:26:29.111043 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:26:29.111052 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:26:29.111075 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:26:30.276448 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:26:30.276550 | orchestrator |
2026-03-29 00:26:30.276566 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-29 00:26:30.276579 | orchestrator | Sunday 29 March 2026 00:26:29 +0000 (0:00:00.120) 0:00:12.890 **********
2026-03-29 00:26:30.276590 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:26:30.276601 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:26:30.276612 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:26:30.276623 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:26:30.276633 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:26:30.276644 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:26:30.276654 | orchestrator |
2026-03-29 00:26:30.276665 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-29 00:26:30.276676 | orchestrator | Sunday 29 March 2026 00:26:29 +0000 (0:00:00.135) 0:00:13.025 **********
2026-03-29 00:26:30.276687 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:26:30.276697 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:26:30.276708 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:26:30.276719 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:26:30.276729 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:26:30.276739 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:26:30.276750 | orchestrator |
2026-03-29 00:26:30.276761 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-29 00:26:30.276771 | orchestrator | Sunday 29 March 2026 00:26:29 +0000 (0:00:00.640) 0:00:13.665 **********
2026-03-29 00:26:30.276782 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:26:30.276792 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:26:30.276803 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:26:30.276814 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:26:30.276824 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:26:30.276835 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:26:30.276845 | orchestrator |
2026-03-29 00:26:30.276856 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:26:30.276891 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 00:26:30.276904 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 00:26:30.276915 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 00:26:30.276926 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 00:26:30.276936 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 00:26:30.276947 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 00:26:30.277004 | orchestrator |
2026-03-29 00:26:30.277017 | orchestrator |
2026-03-29 00:26:30.277030 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:26:30.277042 | orchestrator | Sunday 29 March 2026 00:26:30 +0000 (0:00:00.204) 0:00:13.870 **********
2026-03-29 00:26:30.277054 | orchestrator | ===============================================================================
2026-03-29 00:26:30.277066 | orchestrator | Gathering Facts --------------------------------------------------------- 4.31s
2026-03-29 00:26:30.277079 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.29s
2026-03-29 00:26:30.277092 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.26s
2026-03-29 00:26:30.277127 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.16s
2026-03-29 00:26:30.277140 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.94s
2026-03-29 00:26:30.277152 | orchestrator | Do not require tty for all users ---------------------------------------- 0.79s
2026-03-29 00:26:30.277164 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.72s
2026-03-29 00:26:30.277176 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.70s
2026-03-29 00:26:30.277188 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.64s
2026-03-29 00:26:30.277200 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.54s
2026-03-29 00:26:30.277212 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.20s
2026-03-29 00:26:30.277224 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2026-03-29 00:26:30.277236 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.14s
2026-03-29 00:26:30.277249 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s
2026-03-29 00:26:30.277261 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.14s
2026-03-29 00:26:30.277273 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2026-03-29 00:26:30.277284 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.13s
2026-03-29 00:26:30.277297 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.13s
2026-03-29 00:26:30.277309 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.12s
2026-03-29 00:26:30.438558 | orchestrator | + osism apply --environment custom facts
2026-03-29 00:26:31.655265 | orchestrator | 2026-03-29 00:26:31 | INFO  | Trying to run play facts in environment custom
2026-03-29 00:26:41.793993 | orchestrator | 2026-03-29 00:26:41 | INFO  | Prepare task for execution of facts.
2026-03-29 00:26:41.870457 | orchestrator | 2026-03-29 00:26:41 | INFO  | Task 8f655d76-3654-473f-82e4-cc1c8623365e (facts) was prepared for execution.
2026-03-29 00:26:41.870553 | orchestrator | 2026-03-29 00:26:41 | INFO  | It takes a moment until task 8f655d76-3654-473f-82e4-cc1c8623365e (facts) has been started and output is visible here.
2026-03-29 00:27:25.443250 | orchestrator |
2026-03-29 00:27:25.443363 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-29 00:27:25.443381 | orchestrator |
2026-03-29 00:27:25.443393 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-29 00:27:25.443420 | orchestrator | Sunday 29 March 2026 00:26:44 +0000 (0:00:00.105) 0:00:00.105 **********
2026-03-29 00:27:25.443432 | orchestrator | ok: [testbed-manager]
2026-03-29 00:27:25.443444 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:27:25.443455 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:27:25.443466 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:27:25.443477 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:27:25.443487 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:27:25.443498 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:27:25.443508 | orchestrator |
2026-03-29 00:27:25.443519 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-29 00:27:25.443529 | orchestrator | Sunday 29 March 2026 00:26:46 +0000 (0:00:01.409) 0:00:01.514 **********
2026-03-29 00:27:25.443540 | orchestrator | ok: [testbed-manager]
2026-03-29 00:27:25.443551 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:27:25.443561 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:27:25.443572 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:27:25.443583 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:27:25.443594 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:27:25.443604 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:27:25.443615 | orchestrator |
2026-03-29 00:27:25.443652 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-29 00:27:25.443663 | orchestrator |
2026-03-29 00:27:25.443674 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-29 00:27:25.443684 | orchestrator | Sunday 29 March 2026 00:26:47 +0000 (0:00:01.268) 0:00:02.783 **********
2026-03-29 00:27:25.443695 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:27:25.443706 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:27:25.443716 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:27:25.443727 | orchestrator |
2026-03-29 00:27:25.443737 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-29 00:27:25.443748 | orchestrator | Sunday 29 March 2026 00:26:47 +0000 (0:00:00.076) 0:00:02.860 **********
2026-03-29 00:27:25.443759 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:27:25.443772 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:27:25.443784 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:27:25.443796 | orchestrator |
2026-03-29 00:27:25.443808 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-29 00:27:25.443820 | orchestrator | Sunday 29 March 2026 00:26:47 +0000 (0:00:00.167) 0:00:03.027 **********
2026-03-29 00:27:25.443832 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:27:25.443843 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:27:25.443855 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:27:25.443868 | orchestrator |
2026-03-29 00:27:25.443880 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-29 00:27:25.443893 | orchestrator | Sunday 29 March 2026 00:26:47 +0000 (0:00:00.181) 0:00:03.208 **********
2026-03-29 00:27:25.443973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:27:25.443987 | orchestrator |
2026-03-29 00:27:25.444000 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-29 00:27:25.444013 | orchestrator | Sunday 29 March 2026 00:26:47 +0000 (0:00:00.112) 0:00:03.321 **********
2026-03-29 00:27:25.444025 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:27:25.444036 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:27:25.444047 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:27:25.444057 | orchestrator |
2026-03-29 00:27:25.444068 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-29 00:27:25.444079 | orchestrator | Sunday 29 March 2026 00:26:48 +0000 (0:00:00.400) 0:00:03.721 **********
2026-03-29 00:27:25.444089 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:27:25.444100 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:27:25.444111 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:27:25.444121 | orchestrator |
2026-03-29 00:27:25.444132 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-29 00:27:25.444143 | orchestrator | Sunday 29 March 2026 00:26:48 +0000 (0:00:00.092) 0:00:03.814 **********
2026-03-29 00:27:25.444153 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:27:25.444163 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:27:25.444174 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:27:25.444185 | orchestrator |
2026-03-29 00:27:25.444195 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-29 00:27:25.444206 | orchestrator | Sunday 29 March 2026 00:26:49 +0000 (0:00:01.069) 0:00:04.884 **********
2026-03-29 00:27:25.444216 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:27:25.444227 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:27:25.444238 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:27:25.444248 | orchestrator |
2026-03-29 00:27:25.444259 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-29 00:27:25.444270 | orchestrator | Sunday 29 March 2026 00:26:49 +0000 (0:00:00.477) 0:00:05.361 **********
2026-03-29 00:27:25.444280 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:27:25.444291 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:27:25.444302 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:27:25.444312 | orchestrator |
2026-03-29 00:27:25.444332 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-29 00:27:25.444343 | orchestrator | Sunday 29 March 2026 00:26:51 +0000 (0:00:01.119) 0:00:06.481 **********
2026-03-29 00:27:25.444354 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:27:25.444364 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:27:25.444375 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:27:25.444385 | orchestrator |
2026-03-29 00:27:25.444396 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-29 00:27:25.444406 | orchestrator | Sunday 29 March 2026 00:27:08 +0000 (0:00:17.141) 0:00:23.622 **********
2026-03-29 00:27:25.444417 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:27:25.444427 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:27:25.444438 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:27:25.444448 | orchestrator |
2026-03-29 00:27:25.444459 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-29 00:27:25.444488 | orchestrator | Sunday 29 March 2026 00:27:08 +0000 (0:00:00.103) 0:00:23.725 **********
2026-03-29 00:27:25.444499 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:27:25.444510 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:27:25.444520 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:27:25.444531 | orchestrator |
2026-03-29 00:27:25.444622 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-29 00:27:25.444636 | orchestrator | Sunday 29 March 2026 00:27:16 +0000 (0:00:08.182) 0:00:31.908 **********
2026-03-29 00:27:25.444647 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:27:25.444658 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:27:25.444668 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:27:25.444679 | orchestrator |
2026-03-29 00:27:25.444690 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-29 00:27:25.444700 | orchestrator | Sunday 29 March 2026 00:27:16 +0000 (0:00:00.437) 0:00:32.346 **********
2026-03-29 00:27:25.444711 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-29 00:27:25.444722 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-29 00:27:25.444733 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-29 00:27:25.444743 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-29 00:27:25.444754 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-29 00:27:25.444764 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-29 00:27:25.444775 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-29 00:27:25.444785 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-29 00:27:25.444796 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-29 00:27:25.444806 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-29 00:27:25.444817 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-29 00:27:25.444827 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-29 00:27:25.444838 | orchestrator |
2026-03-29 00:27:25.444848 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-29 00:27:25.444859 | orchestrator | Sunday 29 March 2026 00:27:20 +0000 (0:00:03.535) 0:00:35.881 **********
2026-03-29 00:27:25.444869 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:27:25.444880 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:27:25.444890 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:27:25.444924 | orchestrator |
2026-03-29 00:27:25.444935 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-29 00:27:25.444946 | orchestrator |
2026-03-29 00:27:25.444956 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-29 00:27:25.445009 | orchestrator | Sunday 29 March 2026 00:27:21 +0000 (0:00:01.289) 0:00:37.171 **********
2026-03-29 00:27:25.445021 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:27:25.445040 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:27:25.445051 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:27:25.445062 | orchestrator | ok: [testbed-manager]
2026-03-29 00:27:25.445072 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:27:25.445083 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:27:25.445093 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:27:25.445104 | orchestrator |
2026-03-29 00:27:25.445114 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:27:25.445126 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:27:25.445138 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:27:25.445150 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:27:25.445161 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:27:25.445172 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:27:25.445183 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:27:25.445193 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:27:25.445204 | orchestrator |
2026-03-29 00:27:25.445214 | orchestrator |
2026-03-29 00:27:25.445225 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:27:25.445235 | orchestrator | Sunday 29 March 2026 00:27:25 +0000 (0:00:03.673) 0:00:40.844 **********
2026-03-29 00:27:25.445246 | orchestrator | ===============================================================================
2026-03-29 00:27:25.445257 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.14s
2026-03-29 00:27:25.445267 | orchestrator | Install required packages (Debian) -------------------------------------- 8.18s
2026-03-29 00:27:25.445278 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.67s
2026-03-29 00:27:25.445288 | orchestrator | Copy fact files --------------------------------------------------------- 3.54s
2026-03-29 00:27:25.445299 | orchestrator | Create custom facts directory ------------------------------------------- 1.41s
2026-03-29 00:27:25.445310 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.29s
2026-03-29 00:27:25.445330 | orchestrator | Copy fact file ---------------------------------------------------------- 1.27s
2026-03-29 00:27:25.614553 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.12s
2026-03-29 00:27:25.614665 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.07s
2026-03-29 00:27:25.614681 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s
2026-03-29 00:27:25.614693 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s
2026-03-29 00:27:25.614704 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.40s
2026-03-29 00:27:25.614714 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.18s
2026-03-29 00:27:25.614725 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.17s
2026-03-29 00:27:25.614736 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.11s
2026-03-29 00:27:25.614747 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-03-29 00:27:25.614758 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.09s
2026-03-29 00:27:25.614768 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.08s
2026-03-29 00:27:25.789042 | orchestrator | + osism apply bootstrap
2026-03-29 00:27:37.065471 | orchestrator | 2026-03-29 00:27:37 | INFO  | Prepare task for execution of bootstrap.
2026-03-29 00:27:37.141225 | orchestrator | 2026-03-29 00:27:37 | INFO  | Task ea059ec8-707e-4caf-99fa-dbebc21cead8 (bootstrap) was prepared for execution.
2026-03-29 00:27:37.141337 | orchestrator | 2026-03-29 00:27:37 | INFO  | It takes a moment until task ea059ec8-707e-4caf-99fa-dbebc21cead8 (bootstrap) has been started and output is visible here.
2026-03-29 00:27:53.044583 | orchestrator |
2026-03-29 00:27:53.044673 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-29 00:27:53.044684 | orchestrator |
2026-03-29 00:27:53.044691 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-29 00:27:53.044697 | orchestrator | Sunday 29 March 2026 00:27:40 +0000 (0:00:00.187) 0:00:00.187 **********
2026-03-29 00:27:53.044703 | orchestrator | ok: [testbed-manager]
2026-03-29 00:27:53.044710 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:27:53.044716 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:27:53.044722 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:27:53.044728 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:27:53.044734 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:27:53.044739 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:27:53.044745 | orchestrator |
2026-03-29 00:27:53.044751 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-29 00:27:53.044757 | orchestrator |
2026-03-29 00:27:53.044762 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-29 00:27:53.044768 | orchestrator | Sunday 29 March 2026 00:27:40 +0000 (0:00:00.305) 0:00:00.493 **********
2026-03-29 00:27:53.044778 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:27:53.044793 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:27:53.044804 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:27:53.044814 | orchestrator | ok: [testbed-manager]
2026-03-29 00:27:53.044823 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:27:53.044832 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:27:53.044842 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:27:53.044851 | orchestrator |
2026-03-29 00:27:53.044861 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-29 00:27:53.044912 | orchestrator |
2026-03-29 00:27:53.044920 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-29 00:27:53.044926 | orchestrator | Sunday 29 March 2026 00:27:45 +0000 (0:00:04.822) 0:00:05.316 **********
2026-03-29 00:27:53.044933 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-29 00:27:53.044939 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-29 00:27:53.044945 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-29 00:27:53.044951 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-29 00:27:53.044956 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 00:27:53.044962 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-29 00:27:53.044968 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-29 00:27:53.044974 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-29 00:27:53.044980 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-29 00:27:53.044985 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-29 00:27:53.044991 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-29 00:27:53.044997 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-29 00:27:53.045003 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-29 00:27:53.045008 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-29 00:27:53.045014 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-29 00:27:53.045020 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-29 00:27:53.045048 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-29 00:27:53.045054 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-29 00:27:53.045060 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-29 00:27:53.045065 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-29 00:27:53.045071 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-29 00:27:53.045076 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-29 00:27:53.045082 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:27:53.045088 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-29 00:27:53.045093 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-29 00:27:53.045099 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-29 00:27:53.045104 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:27:53.045111 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-29 00:27:53.045116 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-29 00:27:53.045123 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-29 00:27:53.045130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-29 00:27:53.045137 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-29 00:27:53.045143 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:27:53.045150 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-29 00:27:53.045157 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-29 00:27:53.045163 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:27:53.045170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-29 00:27:53.045177 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-29 00:27:53.045183 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-29 00:27:53.045190 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-29 00:27:53.045196 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-29 00:27:53.045203 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-29 00:27:53.045210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 00:27:53.045219 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-29 00:27:53.045228 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-29 00:27:53.045244 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 00:27:53.045273 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-29 00:27:53.045284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 00:27:53.045294 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-29 00:27:53.045304 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:27:53.045314 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-29 00:27:53.045324 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-29 00:27:53.045333 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:27:53.045343 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-29 00:27:53.045353 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-29 00:27:53.045362 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:27:53.045373 | orchestrator |
2026-03-29 00:27:53.045383 |
orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-03-29 00:27:53.045393 | orchestrator | 2026-03-29 00:27:53.045403 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-03-29 00:27:53.045412 | orchestrator | Sunday 29 March 2026 00:27:46 +0000 (0:00:00.461) 0:00:05.777 ********** 2026-03-29 00:27:53.045420 | orchestrator | ok: [testbed-manager] 2026-03-29 00:27:53.045428 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:27:53.045447 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:27:53.045457 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:27:53.045468 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:27:53.045478 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:27:53.045534 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:27:53.045545 | orchestrator | 2026-03-29 00:27:53.045555 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-03-29 00:27:53.045564 | orchestrator | Sunday 29 March 2026 00:27:47 +0000 (0:00:01.386) 0:00:07.164 ********** 2026-03-29 00:27:53.045574 | orchestrator | ok: [testbed-manager] 2026-03-29 00:27:53.045584 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:27:53.045593 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:27:53.045602 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:27:53.045611 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:27:53.045620 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:27:53.045630 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:27:53.045640 | orchestrator | 2026-03-29 00:27:53.045649 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-03-29 00:27:53.045659 | orchestrator | Sunday 29 March 2026 00:27:48 +0000 (0:00:01.140) 0:00:08.304 ********** 2026-03-29 00:27:53.045670 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:27:53.045682 | orchestrator | 2026-03-29 00:27:53.045692 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-29 00:27:53.045701 | orchestrator | Sunday 29 March 2026 00:27:48 +0000 (0:00:00.292) 0:00:08.596 ********** 2026-03-29 00:27:53.045711 | orchestrator | changed: [testbed-manager] 2026-03-29 00:27:53.045720 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:27:53.045730 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:27:53.045739 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:27:53.045749 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:27:53.045759 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:27:53.045768 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:27:53.045778 | orchestrator | 2026-03-29 00:27:53.045788 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-29 00:27:53.045797 | orchestrator | Sunday 29 March 2026 00:27:50 +0000 (0:00:01.485) 0:00:10.082 ********** 2026-03-29 00:27:53.045807 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:27:53.045818 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:27:53.045829 | orchestrator | 2026-03-29 00:27:53.045839 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-29 00:27:53.045866 | orchestrator | Sunday 29 March 2026 00:27:50 +0000 (0:00:00.274) 0:00:10.357 ********** 2026-03-29 00:27:53.045925 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:27:53.045936 | 
orchestrator | changed: [testbed-node-2] 2026-03-29 00:27:53.045946 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:27:53.045955 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:27:53.045970 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:27:53.045979 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:27:53.045988 | orchestrator | 2026-03-29 00:27:53.045998 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-03-29 00:27:53.046008 | orchestrator | Sunday 29 March 2026 00:27:51 +0000 (0:00:01.185) 0:00:11.542 ********** 2026-03-29 00:27:53.046102 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:27:53.046113 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:27:53.046123 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:27:53.046133 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:27:53.046158 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:27:53.046168 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:27:53.046186 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:27:53.046196 | orchestrator | 2026-03-29 00:27:53.046206 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-29 00:27:53.046217 | orchestrator | Sunday 29 March 2026 00:27:52 +0000 (0:00:00.686) 0:00:12.228 ********** 2026-03-29 00:27:53.046226 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:27:53.046236 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:27:53.046245 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:27:53.046254 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:27:53.046264 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:27:53.046273 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:27:53.046283 | orchestrator | ok: [testbed-manager] 2026-03-29 00:27:53.046293 | orchestrator | 2026-03-29 00:27:53.046303 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-03-29 00:27:53.046314 | orchestrator | Sunday 29 March 2026 00:27:52 +0000 (0:00:00.407) 0:00:12.636 ********** 2026-03-29 00:27:53.046323 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:27:53.046333 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:27:53.046353 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:28:04.935017 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:28:04.935119 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:28:04.935133 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:28:04.935143 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:28:04.935152 | orchestrator | 2026-03-29 00:28:04.935163 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-29 00:28:04.935173 | orchestrator | Sunday 29 March 2026 00:27:53 +0000 (0:00:00.201) 0:00:12.837 ********** 2026-03-29 00:28:04.935184 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:28:04.935208 | orchestrator | 2026-03-29 00:28:04.935217 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-29 00:28:04.935227 | orchestrator | Sunday 29 March 2026 00:27:53 +0000 (0:00:00.308) 0:00:13.146 ********** 2026-03-29 00:28:04.935236 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:28:04.935245 | orchestrator | 2026-03-29 00:28:04.935254 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-29 
00:28:04.935263 | orchestrator | Sunday 29 March 2026 00:27:53 +0000 (0:00:00.289) 0:00:13.436 ********** 2026-03-29 00:28:04.935271 | orchestrator | ok: [testbed-manager] 2026-03-29 00:28:04.935281 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:28:04.935290 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:28:04.935298 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:04.935307 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:28:04.935315 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:28:04.935324 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:28:04.935333 | orchestrator | 2026-03-29 00:28:04.935341 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-29 00:28:04.935350 | orchestrator | Sunday 29 March 2026 00:27:55 +0000 (0:00:01.269) 0:00:14.706 ********** 2026-03-29 00:28:04.935359 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:28:04.935369 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:28:04.935377 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:28:04.935386 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:28:04.935395 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:28:04.935403 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:28:04.935412 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:28:04.935420 | orchestrator | 2026-03-29 00:28:04.935429 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-29 00:28:04.935460 | orchestrator | Sunday 29 March 2026 00:27:55 +0000 (0:00:00.212) 0:00:14.918 ********** 2026-03-29 00:28:04.935472 | orchestrator | ok: [testbed-manager] 2026-03-29 00:28:04.935482 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:28:04.935492 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:28:04.935502 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:28:04.935512 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:28:04.935522 | orchestrator 
| ok: [testbed-node-4] 2026-03-29 00:28:04.935532 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:04.935548 | orchestrator | 2026-03-29 00:28:04.935570 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-29 00:28:04.935587 | orchestrator | Sunday 29 March 2026 00:27:55 +0000 (0:00:00.555) 0:00:15.473 ********** 2026-03-29 00:28:04.935603 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:28:04.935617 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:28:04.935632 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:28:04.935644 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:28:04.935660 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:28:04.935674 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:28:04.935689 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:28:04.935702 | orchestrator | 2026-03-29 00:28:04.935719 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-29 00:28:04.935735 | orchestrator | Sunday 29 March 2026 00:27:56 +0000 (0:00:00.253) 0:00:15.726 ********** 2026-03-29 00:28:04.935749 | orchestrator | ok: [testbed-manager] 2026-03-29 00:28:04.935762 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:28:04.935789 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:28:04.935805 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:28:04.935820 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:28:04.935835 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:28:04.935851 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:28:04.935965 | orchestrator | 2026-03-29 00:28:04.935984 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-29 00:28:04.935999 | orchestrator | Sunday 29 March 2026 00:27:56 +0000 (0:00:00.599) 0:00:16.326 ********** 2026-03-29 00:28:04.936013 | orchestrator | ok: 
[testbed-manager] 2026-03-29 00:28:04.936028 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:28:04.936043 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:28:04.936057 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:28:04.936071 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:28:04.936084 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:28:04.936098 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:28:04.936112 | orchestrator | 2026-03-29 00:28:04.936125 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-29 00:28:04.936140 | orchestrator | Sunday 29 March 2026 00:27:57 +0000 (0:00:01.166) 0:00:17.493 ********** 2026-03-29 00:28:04.936155 | orchestrator | ok: [testbed-manager] 2026-03-29 00:28:04.936170 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:28:04.936185 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:28:04.936198 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:28:04.936213 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:28:04.936227 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:04.936241 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:28:04.936256 | orchestrator | 2026-03-29 00:28:04.936270 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-29 00:28:04.936285 | orchestrator | Sunday 29 March 2026 00:27:58 +0000 (0:00:01.054) 0:00:18.547 ********** 2026-03-29 00:28:04.936326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:28:04.936345 | orchestrator | 2026-03-29 00:28:04.936360 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-29 00:28:04.936375 | orchestrator | Sunday 29 March 2026 
00:27:59 +0000 (0:00:00.350) 0:00:18.897 ********** 2026-03-29 00:28:04.936405 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:28:04.936420 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:28:04.936434 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:28:04.936448 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:28:04.936463 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:28:04.936478 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:28:04.936493 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:28:04.936509 | orchestrator | 2026-03-29 00:28:04.936524 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-29 00:28:04.936538 | orchestrator | Sunday 29 March 2026 00:28:00 +0000 (0:00:01.271) 0:00:20.169 ********** 2026-03-29 00:28:04.936553 | orchestrator | ok: [testbed-manager] 2026-03-29 00:28:04.936567 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:28:04.936580 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:28:04.936596 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:28:04.936610 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:28:04.936624 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:28:04.936638 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:04.936652 | orchestrator | 2026-03-29 00:28:04.936667 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-29 00:28:04.936682 | orchestrator | Sunday 29 March 2026 00:28:00 +0000 (0:00:00.227) 0:00:20.397 ********** 2026-03-29 00:28:04.936698 | orchestrator | ok: [testbed-manager] 2026-03-29 00:28:04.936712 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:28:04.936727 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:28:04.936742 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:28:04.936756 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:28:04.936770 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:28:04.936784 | 
orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:04.936798 | orchestrator | 2026-03-29 00:28:04.936813 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-29 00:28:04.936828 | orchestrator | Sunday 29 March 2026 00:28:00 +0000 (0:00:00.241) 0:00:20.639 ********** 2026-03-29 00:28:04.936842 | orchestrator | ok: [testbed-manager] 2026-03-29 00:28:04.936857 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:28:04.936898 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:28:04.936914 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:28:04.936928 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:28:04.936942 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:28:04.936956 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:04.936970 | orchestrator | 2026-03-29 00:28:04.936980 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-29 00:28:04.936988 | orchestrator | Sunday 29 March 2026 00:28:01 +0000 (0:00:00.224) 0:00:20.864 ********** 2026-03-29 00:28:04.936998 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:28:04.937009 | orchestrator | 2026-03-29 00:28:04.937018 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-29 00:28:04.937026 | orchestrator | Sunday 29 March 2026 00:28:01 +0000 (0:00:00.297) 0:00:21.161 ********** 2026-03-29 00:28:04.937035 | orchestrator | ok: [testbed-manager] 2026-03-29 00:28:04.937043 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:28:04.937052 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:28:04.937060 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:28:04.937069 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:28:04.937077 | orchestrator | ok: 
[testbed-node-1] 2026-03-29 00:28:04.937086 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:04.937094 | orchestrator | 2026-03-29 00:28:04.937103 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-29 00:28:04.937111 | orchestrator | Sunday 29 March 2026 00:28:01 +0000 (0:00:00.539) 0:00:21.701 ********** 2026-03-29 00:28:04.937120 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:28:04.937129 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:28:04.937148 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:28:04.937157 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:28:04.937166 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:28:04.937175 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:28:04.937183 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:28:04.937192 | orchestrator | 2026-03-29 00:28:04.937200 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-29 00:28:04.937209 | orchestrator | Sunday 29 March 2026 00:28:02 +0000 (0:00:00.216) 0:00:21.917 ********** 2026-03-29 00:28:04.937218 | orchestrator | ok: [testbed-manager] 2026-03-29 00:28:04.937226 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:28:04.937235 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:28:04.937243 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:28:04.937251 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:28:04.937260 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:28:04.937268 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:04.937277 | orchestrator | 2026-03-29 00:28:04.937285 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-29 00:28:04.937294 | orchestrator | Sunday 29 March 2026 00:28:03 +0000 (0:00:01.092) 0:00:23.010 ********** 2026-03-29 00:28:04.937302 | orchestrator | ok: [testbed-manager] 2026-03-29 
00:28:04.937311 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:28:04.937319 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:28:04.937328 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:28:04.937336 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:28:04.937345 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:28:04.937353 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:04.937361 | orchestrator | 2026-03-29 00:28:04.937370 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-29 00:28:04.937378 | orchestrator | Sunday 29 March 2026 00:28:03 +0000 (0:00:00.596) 0:00:23.606 ********** 2026-03-29 00:28:04.937387 | orchestrator | ok: [testbed-manager] 2026-03-29 00:28:04.937396 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:28:04.937404 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:28:04.937413 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:28:04.937433 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:28:47.351405 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:28:47.351509 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:47.351521 | orchestrator | 2026-03-29 00:28:47.351529 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-29 00:28:47.351538 | orchestrator | Sunday 29 March 2026 00:28:05 +0000 (0:00:01.114) 0:00:24.720 ********** 2026-03-29 00:28:47.351545 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:28:47.351551 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:47.351555 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:28:47.351560 | orchestrator | changed: [testbed-manager] 2026-03-29 00:28:47.351563 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:28:47.351567 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:28:47.351571 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:28:47.351575 | orchestrator | 2026-03-29 00:28:47.351580 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-03-29 00:28:47.351584 | orchestrator | Sunday 29 March 2026 00:28:23 +0000 (0:00:18.549) 0:00:43.270 ********** 2026-03-29 00:28:47.351588 | orchestrator | ok: [testbed-manager] 2026-03-29 00:28:47.351592 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:28:47.351596 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:28:47.351600 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:28:47.351604 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:28:47.351607 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:28:47.351611 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:47.351615 | orchestrator | 2026-03-29 00:28:47.351619 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-03-29 00:28:47.351622 | orchestrator | Sunday 29 March 2026 00:28:23 +0000 (0:00:00.215) 0:00:43.485 ********** 2026-03-29 00:28:47.351626 | orchestrator | ok: [testbed-manager] 2026-03-29 00:28:47.351650 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:28:47.351654 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:28:47.351657 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:28:47.351661 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:28:47.351665 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:28:47.351668 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:47.351673 | orchestrator | 2026-03-29 00:28:47.351678 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-03-29 00:28:47.351685 | orchestrator | Sunday 29 March 2026 00:28:24 +0000 (0:00:00.217) 0:00:43.703 ********** 2026-03-29 00:28:47.351691 | orchestrator | ok: [testbed-manager] 2026-03-29 00:28:47.351697 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:28:47.351703 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:28:47.351708 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:28:47.351714 | orchestrator | ok: 
[testbed-node-3] 2026-03-29 00:28:47.351718 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:28:47.351722 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:47.351725 | orchestrator | 2026-03-29 00:28:47.351729 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-03-29 00:28:47.351733 | orchestrator | Sunday 29 March 2026 00:28:24 +0000 (0:00:00.212) 0:00:43.916 ********** 2026-03-29 00:28:47.351739 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:28:47.351745 | orchestrator | 2026-03-29 00:28:47.351765 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-29 00:28:47.351772 | orchestrator | Sunday 29 March 2026 00:28:24 +0000 (0:00:00.282) 0:00:44.199 ********** 2026-03-29 00:28:47.351777 | orchestrator | ok: [testbed-manager] 2026-03-29 00:28:47.351781 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:28:47.351784 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:28:47.351788 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:28:47.351792 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:28:47.351795 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:28:47.351799 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:47.351803 | orchestrator | 2026-03-29 00:28:47.351807 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-29 00:28:47.351811 | orchestrator | Sunday 29 March 2026 00:28:26 +0000 (0:00:01.878) 0:00:46.077 ********** 2026-03-29 00:28:47.351814 | orchestrator | changed: [testbed-manager] 2026-03-29 00:28:47.351841 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:28:47.351845 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:28:47.351849 | orchestrator | 
changed: [testbed-node-4]
2026-03-29 00:28:47.351853 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:28:47.351857 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:28:47.351864 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:28:47.351868 | orchestrator |
2026-03-29 00:28:47.351872 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-03-29 00:28:47.351876 | orchestrator | Sunday 29 March 2026 00:28:27 +0000 (0:00:01.111) 0:00:47.188 **********
2026-03-29 00:28:47.351880 | orchestrator | ok: [testbed-manager]
2026-03-29 00:28:47.351884 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:28:47.351887 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:28:47.351891 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:28:47.351895 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:28:47.351898 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:28:47.351902 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:28:47.351906 | orchestrator |
2026-03-29 00:28:47.351913 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-03-29 00:28:47.351917 | orchestrator | Sunday 29 March 2026 00:28:28 +0000 (0:00:01.007) 0:00:48.196 **********
2026-03-29 00:28:47.351926 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:28:47.351942 | orchestrator |
2026-03-29 00:28:47.351949 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-03-29 00:28:47.351957 | orchestrator | Sunday 29 March 2026 00:28:28 +0000 (0:00:00.291) 0:00:48.487 **********
2026-03-29 00:28:47.351964 | orchestrator | changed: [testbed-manager]
2026-03-29 00:28:47.351970 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:28:47.351976 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:28:47.351983 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:28:47.351993 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:28:47.352000 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:28:47.352006 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:28:47.352012 | orchestrator |
2026-03-29 00:28:47.352036 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-03-29 00:28:47.352044 | orchestrator | Sunday 29 March 2026 00:28:29 +0000 (0:00:01.048) 0:00:49.535 **********
2026-03-29 00:28:47.352052 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:28:47.352059 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:28:47.352064 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:28:47.352073 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:28:47.352080 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:28:47.352086 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:28:47.352093 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:28:47.352099 | orchestrator |
2026-03-29 00:28:47.352106 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-03-29 00:28:47.352112 | orchestrator | Sunday 29 March 2026 00:28:30 +0000 (0:00:00.222) 0:00:49.758 **********
2026-03-29 00:28:47.352119 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:28:47.352126 | orchestrator |
2026-03-29 00:28:47.352132 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-03-29 00:28:47.352140 | orchestrator | Sunday 29 March 2026 00:28:30 +0000 (0:00:00.275) 0:00:50.033 **********
2026-03-29 00:28:47.352148 | orchestrator | ok: [testbed-manager]
2026-03-29 00:28:47.352158 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:28:47.352164 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:28:47.352170 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:28:47.352178 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:28:47.352184 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:28:47.352190 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:28:47.352197 | orchestrator |
2026-03-29 00:28:47.352206 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-03-29 00:28:47.352216 | orchestrator | Sunday 29 March 2026 00:28:32 +0000 (0:00:01.883) 0:00:51.917 **********
2026-03-29 00:28:47.352222 | orchestrator | changed: [testbed-manager]
2026-03-29 00:28:47.352228 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:28:47.352235 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:28:47.352242 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:28:47.352248 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:28:47.352255 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:28:47.352262 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:28:47.352269 | orchestrator |
2026-03-29 00:28:47.352275 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-03-29 00:28:47.352281 | orchestrator | Sunday 29 March 2026 00:28:33 +0000 (0:00:01.179) 0:00:53.097 **********
2026-03-29 00:28:47.352288 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:28:47.352293 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:28:47.352300 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:28:47.352308 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:28:47.352316 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:28:47.352326 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:28:47.352344 | orchestrator | changed: [testbed-manager]
2026-03-29 00:28:47.352351 | orchestrator |
2026-03-29 00:28:47.352357 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-03-29 00:28:47.352363 | orchestrator | Sunday 29 March 2026 00:28:44 +0000 (0:00:11.007) 0:01:04.104 **********
2026-03-29 00:28:47.352368 | orchestrator | ok: [testbed-manager]
2026-03-29 00:28:47.352375 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:28:47.352381 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:28:47.352386 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:28:47.352393 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:28:47.352398 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:28:47.352404 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:28:47.352410 | orchestrator |
2026-03-29 00:28:47.352418 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-03-29 00:28:47.352425 | orchestrator | Sunday 29 March 2026 00:28:45 +0000 (0:00:01.240) 0:01:05.345 **********
2026-03-29 00:28:47.352431 | orchestrator | ok: [testbed-manager]
2026-03-29 00:28:47.352439 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:28:47.352446 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:28:47.352452 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:28:47.352458 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:28:47.352464 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:28:47.352470 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:28:47.352476 | orchestrator |
2026-03-29 00:28:47.352489 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-03-29 00:28:47.352496 | orchestrator | Sunday 29 March 2026 00:28:46 +0000 (0:00:01.100) 0:01:06.446 **********
2026-03-29 00:28:47.352502 | orchestrator | ok: [testbed-manager]
2026-03-29 00:28:47.352508 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:28:47.352515 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:28:47.352521 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:28:47.352527 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:28:47.352532 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:28:47.352538 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:28:47.352544 | orchestrator |
2026-03-29 00:28:47.352552 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-03-29 00:28:47.352559 | orchestrator | Sunday 29 March 2026 00:28:46 +0000 (0:00:00.169) 0:01:06.615 **********
2026-03-29 00:28:47.352568 | orchestrator | ok: [testbed-manager]
2026-03-29 00:28:47.352574 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:28:47.352580 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:28:47.352585 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:28:47.352591 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:28:47.352598 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:28:47.352604 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:28:47.352610 | orchestrator |
2026-03-29 00:28:47.352616 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-03-29 00:28:47.352622 | orchestrator | Sunday 29 March 2026 00:28:47 +0000 (0:00:00.181) 0:01:06.796 **********
2026-03-29 00:28:47.352629 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:28:47.352636 | orchestrator |
2026-03-29 00:28:47.352653 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-03-29 00:31:06.046218 | orchestrator | Sunday 29 March 2026 00:28:47 +0000 (0:00:00.250) 0:01:07.047 **********
2026-03-29 00:31:06.046296 | orchestrator | ok: [testbed-manager]
2026-03-29 00:31:06.046304 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:31:06.046310 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:31:06.046315 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:31:06.046320 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:31:06.046325 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:31:06.046330 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:31:06.046334 | orchestrator |
2026-03-29 00:31:06.046340 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-03-29 00:31:06.046364 | orchestrator | Sunday 29 March 2026 00:28:49 +0000 (0:00:02.195) 0:01:09.243 **********
2026-03-29 00:31:06.046369 | orchestrator | changed: [testbed-manager]
2026-03-29 00:31:06.046375 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:31:06.046380 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:31:06.046384 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:31:06.046389 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:31:06.046394 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:31:06.046398 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:31:06.046403 | orchestrator |
2026-03-29 00:31:06.046408 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-03-29 00:31:06.046413 | orchestrator | Sunday 29 March 2026 00:28:50 +0000 (0:00:00.613) 0:01:09.857 **********
2026-03-29 00:31:06.046418 | orchestrator | ok: [testbed-manager]
2026-03-29 00:31:06.046422 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:31:06.046427 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:31:06.046431 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:31:06.046436 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:31:06.046441 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:31:06.046445 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:31:06.046450 | orchestrator |
2026-03-29 00:31:06.046454 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-03-29 00:31:06.046459 | orchestrator | Sunday 29 March 2026 00:28:50 +0000 (0:00:00.212) 0:01:10.069 **********
2026-03-29 00:31:06.046463 | orchestrator | ok: [testbed-manager]
2026-03-29 00:31:06.046468 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:31:06.046472 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:31:06.046477 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:31:06.046481 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:31:06.046485 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:31:06.046490 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:31:06.046494 | orchestrator |
2026-03-29 00:31:06.046499 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-03-29 00:31:06.046504 | orchestrator | Sunday 29 March 2026 00:28:51 +0000 (0:00:01.472) 0:01:11.542 **********
2026-03-29 00:31:06.046508 | orchestrator | changed: [testbed-manager]
2026-03-29 00:31:06.046513 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:31:06.046517 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:31:06.046522 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:31:06.046526 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:31:06.046531 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:31:06.046535 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:31:06.046540 | orchestrator |
2026-03-29 00:31:06.046545 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-03-29 00:31:06.046549 | orchestrator | Sunday 29 March 2026 00:28:54 +0000 (0:00:02.212) 0:01:13.755 **********
2026-03-29 00:31:06.046554 | orchestrator | ok: [testbed-manager]
2026-03-29 00:31:06.046558 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:31:06.046563 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:31:06.046567 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:31:06.046572 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:31:06.046576 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:31:06.046581 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:31:06.046585 | orchestrator |
2026-03-29 00:31:06.046590 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-03-29 00:31:06.046595 | orchestrator | Sunday 29 March 2026 00:28:56 +0000 (0:00:02.833) 0:01:16.588 **********
2026-03-29 00:31:06.046599 | orchestrator | ok: [testbed-manager]
2026-03-29 00:31:06.046604 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:31:06.046608 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:31:06.046613 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:31:06.046617 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:31:06.046623 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:31:06.046630 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:31:06.046638 | orchestrator |
2026-03-29 00:31:06.046649 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-03-29 00:31:06.046727 | orchestrator | Sunday 29 March 2026 00:29:34 +0000 (0:00:38.029) 0:01:54.618 **********
2026-03-29 00:31:06.046737 | orchestrator | changed: [testbed-manager]
2026-03-29 00:31:06.046745 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:31:06.046752 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:31:06.046759 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:31:06.046763 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:31:06.046768 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:31:06.046772 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:31:06.046777 | orchestrator |
2026-03-29 00:31:06.046782 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-03-29 00:31:06.046787 | orchestrator | Sunday 29 March 2026 00:30:51 +0000 (0:01:16.898) 0:03:11.517 **********
2026-03-29 00:31:06.046793 | orchestrator | ok: [testbed-manager]
2026-03-29 00:31:06.046798 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:31:06.046803 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:31:06.046809 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:31:06.046814 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:31:06.046820 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:31:06.046825 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:31:06.046830 | orchestrator |
2026-03-29 00:31:06.046836 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-03-29 00:31:06.046841 | orchestrator | Sunday 29 March 2026 00:30:54 +0000 (0:00:02.331) 0:03:13.848 **********
2026-03-29 00:31:06.046846 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:31:06.046852 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:31:06.046857 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:31:06.046863 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:31:06.046868 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:31:06.046873 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:31:06.046879 | orchestrator | changed: [testbed-manager]
2026-03-29 00:31:06.046884 | orchestrator |
2026-03-29 00:31:06.046890 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-03-29 00:31:06.046895 | orchestrator | Sunday 29 March 2026 00:31:05 +0000 (0:00:10.863) 0:03:24.712 **********
2026-03-29 00:31:06.046921 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-03-29 00:31:06.046934 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-03-29 00:31:06.046942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-03-29 00:31:06.046949 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-29 00:31:06.046959 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-29 00:31:06.046965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-03-29 00:31:06.046973 | orchestrator |
2026-03-29 00:31:06.046979 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-03-29 00:31:06.046985 | orchestrator | Sunday 29 March 2026 00:31:05 +0000 (0:00:00.327) 0:03:25.039 **********
2026-03-29 00:31:06.046990 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-29 00:31:06.046997 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:31:06.047005 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-29 00:31:06.047016 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-29 00:31:06.047027 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:31:06.047035 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:31:06.047049 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-29 00:31:06.047056 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:31:06.047064 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-29 00:31:06.047072 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-29 00:31:06.047079 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-29 00:31:06.047086 | orchestrator |
2026-03-29 00:31:06.047094 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-03-29 00:31:06.047101 | orchestrator | Sunday 29 March 2026 00:31:05 +0000 (0:00:00.635) 0:03:25.674 **********
2026-03-29 00:31:06.047110 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-29 00:31:06.047119 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-29 00:31:06.047128 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-29 00:31:06.047137 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-29 00:31:06.047145 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-29 00:31:06.047159 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-29 00:31:13.048909 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-29 00:31:13.048994 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-29 00:31:13.049002 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-29 00:31:13.049008 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-29 00:31:13.049015 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:31:13.049022 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-29 00:31:13.049027 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-29 00:31:13.049032 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-29 00:31:13.049054 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-29 00:31:13.049059 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-29 00:31:13.049065 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-29 00:31:13.049070 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-29 00:31:13.049075 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-29 00:31:13.049080 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-29 00:31:13.049085 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-29 00:31:13.049090 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-29 00:31:13.049096 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-29 00:31:13.049101 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:31:13.049106 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-29 00:31:13.049111 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-29 00:31:13.049116 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-29 00:31:13.049121 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-29 00:31:13.049126 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-29 00:31:13.049131 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-29 00:31:13.049136 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-29 00:31:13.049141 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-29 00:31:13.049146 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-29 00:31:13.049151 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-29 00:31:13.049156 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-29 00:31:13.049172 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-29 00:31:13.049177 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-29 00:31:13.049182 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-29 00:31:13.049187 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-29 00:31:13.049192 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-29 00:31:13.049197 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:31:13.049202 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-29 00:31:13.049207 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-29 00:31:13.049212 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:31:13.049217 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-29 00:31:13.049228 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-29 00:31:13.049233 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-29 00:31:13.049243 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-29 00:31:13.049248 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-29 00:31:13.049265 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-29 00:31:13.049271 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-29 00:31:13.049276 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-29 00:31:13.049281 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-29 00:31:13.049286 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-29 00:31:13.049291 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-29 00:31:13.049296 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-29 00:31:13.049301 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-29 00:31:13.049306 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-29 00:31:13.049311 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-29 00:31:13.049316 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-29 00:31:13.049321 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-29 00:31:13.049326 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-29 00:31:13.049331 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-29 00:31:13.049336 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-29 00:31:13.049341 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-29 00:31:13.049346 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-29 00:31:13.049351 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-29 00:31:13.049356 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-29 00:31:13.049361 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-29 00:31:13.049366 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-29 00:31:13.049371 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-29 00:31:13.049376 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-29 00:31:13.049382 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-29 00:31:13.049387 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-29 00:31:13.049392 | orchestrator |
2026-03-29 00:31:13.049397 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-29 00:31:13.049402 | orchestrator | Sunday 29 March 2026 00:31:11 +0000 (0:00:05.930) 0:03:31.605 **********
2026-03-29 00:31:13.049407 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 00:31:13.049413 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 00:31:13.049418 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 00:31:13.049426 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 00:31:13.049435 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 00:31:13.049440 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 00:31:13.049445 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 00:31:13.049450 | orchestrator |
2026-03-29 00:31:13.049455 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-29 00:31:13.049460 | orchestrator | Sunday 29 March 2026 00:31:12 +0000 (0:00:00.631) 0:03:32.236 **********
2026-03-29 00:31:13.049465 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:31:13.049472 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:31:13.049478 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:31:13.049484 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:31:13.049490 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:31:13.049496 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:31:13.049502 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:31:13.049508 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:31:13.049514 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:31:13.049520 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:31:13.049530 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:31:26.478890 | orchestrator |
2026-03-29 00:31:26.478995 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-29 00:31:26.479010 | orchestrator | Sunday 29 March 2026 00:31:13 +0000 (0:00:00.547) 0:03:32.784 **********
2026-03-29 00:31:26.479019 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:31:26.479027 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:31:26.479036 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:31:26.479043 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:31:26.479049 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:31:26.479055 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:31:26.479062 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:31:26.479068 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:31:26.479074 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:31:26.479081 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:31:26.479087 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:31:26.479093 | orchestrator |
2026-03-29 00:31:26.479111 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-29 00:31:26.479118 | orchestrator | Sunday 29 March 2026 00:31:14 +0000 (0:00:01.475) 0:03:34.260 **********
2026-03-29 00:31:26.479124 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-29 00:31:26.479130 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:31:26.479137 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-29 00:31:26.479144 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:31:26.479151 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-29 00:31:26.479173 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:31:26.479178 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-29 00:31:26.479182 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:31:26.479186 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-29 00:31:26.479190 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-29 00:31:26.479194 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-29 00:31:26.479197 | orchestrator |
2026-03-29 00:31:26.479201 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-29 00:31:26.479205 | orchestrator | Sunday 29 March 2026 00:31:15 +0000 (0:00:00.234) 0:03:34.913 **********
2026-03-29 00:31:26.479208 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:31:26.479212 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:31:26.479216 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:31:26.479220 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:31:26.479224 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:31:26.479227 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:31:26.479231 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:31:26.479235 | orchestrator |
2026-03-29 00:31:26.479247 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-29 00:31:26.479251 | orchestrator | Sunday 29 March 2026 00:31:15 +0000 (0:00:00.234) 0:03:35.148 **********
2026-03-29 00:31:26.479255 | orchestrator | ok: [testbed-manager]
2026-03-29 00:31:26.479260 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:31:26.479264 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:31:26.479268 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:31:26.479271 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:31:26.479275 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:31:26.479279 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:31:26.479282 | orchestrator |
2026-03-29 00:31:26.479286 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-29 00:31:26.479290 | orchestrator | Sunday 29 March 2026 00:31:20 +0000 (0:00:05.161) 0:03:40.309 **********
2026-03-29 00:31:26.479295 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-29 00:31:26.479302 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:31:26.479314 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-29 00:31:26.479321 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-29 00:31:26.479326 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:31:26.479332 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-29 00:31:26.479338 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:31:26.479344 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:31:26.479350 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-29 00:31:26.479356 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-29 00:31:26.479361 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:31:26.479367 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:31:26.479372 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-29 00:31:26.479378
| orchestrator | skipping: [testbed-node-5] 2026-03-29 00:31:26.479383 | orchestrator | 2026-03-29 00:31:26.479389 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-03-29 00:31:26.479395 | orchestrator | Sunday 29 March 2026 00:31:20 +0000 (0:00:00.330) 0:03:40.639 ********** 2026-03-29 00:31:26.479400 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-03-29 00:31:26.479406 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-03-29 00:31:26.479412 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-03-29 00:31:26.479435 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-03-29 00:31:26.479442 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-03-29 00:31:26.479449 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-03-29 00:31:26.479463 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-03-29 00:31:26.479471 | orchestrator | 2026-03-29 00:31:26.479478 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-03-29 00:31:26.479485 | orchestrator | Sunday 29 March 2026 00:31:22 +0000 (0:00:01.118) 0:03:41.758 ********** 2026-03-29 00:31:26.479492 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:31:26.479498 | orchestrator | 2026-03-29 00:31:26.479503 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-03-29 00:31:26.479508 | orchestrator | Sunday 29 March 2026 00:31:22 +0000 (0:00:00.422) 0:03:42.180 ********** 2026-03-29 00:31:26.479512 | orchestrator | ok: [testbed-manager] 2026-03-29 00:31:26.479516 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:31:26.479520 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:31:26.479524 | orchestrator | ok: 
[testbed-node-1] 2026-03-29 00:31:26.479528 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:31:26.479533 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:31:26.479538 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:31:26.479543 | orchestrator | 2026-03-29 00:31:26.479549 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-03-29 00:31:26.479555 | orchestrator | Sunday 29 March 2026 00:31:23 +0000 (0:00:01.405) 0:03:43.586 ********** 2026-03-29 00:31:26.479562 | orchestrator | ok: [testbed-manager] 2026-03-29 00:31:26.479568 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:31:26.479574 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:31:26.479581 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:31:26.479587 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:31:26.479593 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:31:26.479614 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:31:26.479619 | orchestrator | 2026-03-29 00:31:26.479624 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-03-29 00:31:26.479628 | orchestrator | Sunday 29 March 2026 00:31:24 +0000 (0:00:00.659) 0:03:44.246 ********** 2026-03-29 00:31:26.479632 | orchestrator | changed: [testbed-manager] 2026-03-29 00:31:26.479710 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:31:26.479716 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:31:26.479720 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:31:26.479725 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:31:26.479729 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:31:26.479733 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:31:26.479738 | orchestrator | 2026-03-29 00:31:26.479742 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-03-29 00:31:26.479746 | orchestrator | Sunday 29 March 2026 00:31:25 +0000 (0:00:00.635) 
0:03:44.882 ********** 2026-03-29 00:31:26.479751 | orchestrator | ok: [testbed-manager] 2026-03-29 00:31:26.479755 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:31:26.479760 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:31:26.479764 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:31:26.479767 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:31:26.479771 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:31:26.479775 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:31:26.479779 | orchestrator | 2026-03-29 00:31:26.479782 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-29 00:31:26.479786 | orchestrator | Sunday 29 March 2026 00:31:25 +0000 (0:00:00.719) 0:03:45.601 ********** 2026-03-29 00:31:26.479797 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774742750.0786097, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 00:31:26.479808 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774742770.5655015, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 00:31:26.479812 | orchestrator | 
changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774742760.2059145, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 00:31:26.479830 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774742772.3501225, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 00:31:32.123212 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774742743.804863, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 00:31:32.123307 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774742749.3048427, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 00:31:32.123325 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774742775.188749, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 00:31:32.123336 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 00:31:32.123388 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 00:31:32.123396 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 00:31:32.123402 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 00:31:32.123429 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 00:31:32.123436 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 00:31:32.123443 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 00:31:32.123450 | orchestrator | 2026-03-29 00:31:32.123457 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-29 00:31:32.123465 | orchestrator | Sunday 29 March 2026 00:31:26 +0000 (0:00:01.061) 0:03:46.663 ********** 2026-03-29 00:31:32.123472 | orchestrator | changed: [testbed-manager] 2026-03-29 00:31:32.123479 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:31:32.123485 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:31:32.123497 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:31:32.123503 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:31:32.123509 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:31:32.123515 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:31:32.123521 | orchestrator | 2026-03-29 00:31:32.123528 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-03-29 00:31:32.123534 | orchestrator | Sunday 29 March 2026 00:31:28 +0000 (0:00:01.149) 0:03:47.813 ********** 2026-03-29 00:31:32.123540 | orchestrator | changed: [testbed-manager] 2026-03-29 00:31:32.123546 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:31:32.123552 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:31:32.123558 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:31:32.123567 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:31:32.123574 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:31:32.123580 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:31:32.123586 | orchestrator | 2026-03-29 00:31:32.123592 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-29 00:31:32.123598 | orchestrator | Sunday 29 March 2026 00:31:29 +0000 (0:00:01.158) 0:03:48.971 ********** 2026-03-29 00:31:32.123604 | orchestrator | changed: [testbed-manager] 2026-03-29 00:31:32.123610 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:31:32.123616 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:31:32.123622 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:31:32.123628 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:31:32.123665 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:31:32.123671 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:31:32.123677 | orchestrator | 2026-03-29 00:31:32.123684 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-29 00:31:32.123690 | orchestrator | Sunday 29 March 2026 00:31:30 +0000 (0:00:01.387) 0:03:50.359 ********** 2026-03-29 00:31:32.123696 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:31:32.123702 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:31:32.123708 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:31:32.123714 | orchestrator | skipping: [testbed-node-2] 
2026-03-29 00:31:32.123720 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:31:32.123726 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:31:32.123731 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:31:32.123737 | orchestrator | 2026-03-29 00:31:32.123744 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-03-29 00:31:32.123750 | orchestrator | Sunday 29 March 2026 00:31:30 +0000 (0:00:00.261) 0:03:50.620 ********** 2026-03-29 00:31:32.123758 | orchestrator | ok: [testbed-manager] 2026-03-29 00:31:32.123766 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:31:32.123773 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:31:32.123779 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:31:32.123786 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:31:32.123793 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:31:32.123800 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:31:32.123807 | orchestrator | 2026-03-29 00:31:32.123813 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-03-29 00:31:32.123820 | orchestrator | Sunday 29 March 2026 00:31:31 +0000 (0:00:00.778) 0:03:51.399 ********** 2026-03-29 00:31:32.123829 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:31:32.123838 | orchestrator | 2026-03-29 00:31:32.123845 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-03-29 00:31:32.123857 | orchestrator | Sunday 29 March 2026 00:31:32 +0000 (0:00:00.422) 0:03:51.822 ********** 2026-03-29 00:32:50.279988 | orchestrator | ok: [testbed-manager] 2026-03-29 00:32:50.280113 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:32:50.280129 | orchestrator | changed: 
[testbed-node-0] 2026-03-29 00:32:50.280137 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:32:50.280217 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:32:50.280228 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:32:50.280235 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:32:50.280243 | orchestrator | 2026-03-29 00:32:50.280254 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-03-29 00:32:50.280264 | orchestrator | Sunday 29 March 2026 00:31:41 +0000 (0:00:09.313) 0:04:01.135 ********** 2026-03-29 00:32:50.280272 | orchestrator | ok: [testbed-manager] 2026-03-29 00:32:50.280279 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:32:50.280287 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:32:50.280295 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:32:50.280302 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:32:50.280310 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:32:50.280318 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:32:50.280325 | orchestrator | 2026-03-29 00:32:50.280333 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-03-29 00:32:50.280340 | orchestrator | Sunday 29 March 2026 00:31:42 +0000 (0:00:01.349) 0:04:02.484 ********** 2026-03-29 00:32:50.280348 | orchestrator | ok: [testbed-manager] 2026-03-29 00:32:50.280355 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:32:50.280362 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:32:50.280369 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:32:50.280376 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:32:50.280383 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:32:50.280390 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:32:50.280397 | orchestrator | 2026-03-29 00:32:50.280405 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-03-29 00:32:50.280412 | orchestrator | 
Sunday 29 March 2026 00:31:43 +0000 (0:00:01.008) 0:04:03.493 ********** 2026-03-29 00:32:50.280419 | orchestrator | ok: [testbed-manager] 2026-03-29 00:32:50.280426 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:32:50.280434 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:32:50.280440 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:32:50.280444 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:32:50.280448 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:32:50.280453 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:32:50.280457 | orchestrator | 2026-03-29 00:32:50.280462 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-03-29 00:32:50.280467 | orchestrator | Sunday 29 March 2026 00:31:44 +0000 (0:00:00.317) 0:04:03.811 ********** 2026-03-29 00:32:50.280471 | orchestrator | ok: [testbed-manager] 2026-03-29 00:32:50.280485 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:32:50.280489 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:32:50.280494 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:32:50.280498 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:32:50.280503 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:32:50.280507 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:32:50.280511 | orchestrator | 2026-03-29 00:32:50.280564 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-03-29 00:32:50.280573 | orchestrator | Sunday 29 March 2026 00:31:44 +0000 (0:00:00.297) 0:04:04.108 ********** 2026-03-29 00:32:50.280580 | orchestrator | ok: [testbed-manager] 2026-03-29 00:32:50.280587 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:32:50.280594 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:32:50.280601 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:32:50.280609 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:32:50.280617 | orchestrator | ok: [testbed-node-4] 2026-03-29 
00:32:50.280625 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:32:50.280632 | orchestrator | 2026-03-29 00:32:50.280640 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-03-29 00:32:50.280647 | orchestrator | Sunday 29 March 2026 00:31:44 +0000 (0:00:00.315) 0:04:04.424 ********** 2026-03-29 00:32:50.280655 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:32:50.280662 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:32:50.280670 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:32:50.280687 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:32:50.280694 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:32:50.280702 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:32:50.280709 | orchestrator | ok: [testbed-manager] 2026-03-29 00:32:50.280716 | orchestrator | 2026-03-29 00:32:50.280724 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-03-29 00:32:50.280731 | orchestrator | Sunday 29 March 2026 00:31:49 +0000 (0:00:04.809) 0:04:09.233 ********** 2026-03-29 00:32:50.280741 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:32:50.280750 | orchestrator | 2026-03-29 00:32:50.280758 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-03-29 00:32:50.280766 | orchestrator | Sunday 29 March 2026 00:31:49 +0000 (0:00:00.382) 0:04:09.616 ********** 2026-03-29 00:32:50.280773 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-03-29 00:32:50.280780 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-03-29 00:32:50.280786 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:32:50.280791 | orchestrator | skipping: [testbed-node-0] => 
(item=apt-daily-upgrade)  2026-03-29 00:32:50.280796 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-03-29 00:32:50.280801 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-03-29 00:32:50.280806 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-03-29 00:32:50.280811 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:32:50.280816 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-03-29 00:32:50.280820 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-03-29 00:32:50.280825 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:32:50.280830 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:32:50.280835 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-03-29 00:32:50.280840 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-03-29 00:32:50.280845 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:32:50.280850 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-03-29 00:32:50.280869 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-03-29 00:32:50.280875 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:32:50.280880 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-03-29 00:32:50.280885 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-03-29 00:32:50.280890 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:32:50.280894 | orchestrator | 2026-03-29 00:32:50.280899 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-03-29 00:32:50.280903 | orchestrator | Sunday 29 March 2026 00:31:50 +0000 (0:00:00.334) 0:04:09.950 ********** 2026-03-29 00:32:50.280908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:32:50.280912 | orchestrator | 2026-03-29 00:32:50.280916 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-03-29 00:32:50.280921 | orchestrator | Sunday 29 March 2026 00:31:50 +0000 (0:00:00.512) 0:04:10.463 ********** 2026-03-29 00:32:50.280925 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-03-29 00:32:50.280930 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:32:50.280934 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-03-29 00:32:50.280938 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-03-29 00:32:50.280956 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:32:50.280961 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:32:50.280965 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-03-29 00:32:50.280974 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-03-29 00:32:50.280978 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:32:50.280982 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:32:50.280987 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-03-29 00:32:50.280991 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:32:50.280996 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-03-29 00:32:50.281000 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:32:50.281004 | orchestrator | 2026-03-29 00:32:50.281008 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-03-29 00:32:50.281013 | orchestrator | Sunday 29 March 2026 00:31:51 +0000 (0:00:00.312) 0:04:10.775 ********** 2026-03-29 00:32:50.281017 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:32:50.281021 | orchestrator |
2026-03-29 00:32:50.281026 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-29 00:32:50.281030 | orchestrator | Sunday 29 March 2026 00:31:51 +0000 (0:00:00.400) 0:04:11.175 **********
2026-03-29 00:32:50.281037 | orchestrator | changed: [testbed-manager]
2026-03-29 00:32:50.281041 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:32:50.281045 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:32:50.281050 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:32:50.281054 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:32:50.281058 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:32:50.281062 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:32:50.281066 | orchestrator |
2026-03-29 00:32:50.281070 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-29 00:32:50.281075 | orchestrator | Sunday 29 March 2026 00:32:25 +0000 (0:00:33.696) 0:04:44.872 **********
2026-03-29 00:32:50.281079 | orchestrator | changed: [testbed-manager]
2026-03-29 00:32:50.281083 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:32:50.281087 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:32:50.281092 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:32:50.281099 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:32:50.281106 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:32:50.281113 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:32:50.281119 | orchestrator |
2026-03-29 00:32:50.281126 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-29 00:32:50.281133 | orchestrator | Sunday 29 March 2026 00:32:33 +0000 (0:00:08.712) 0:04:53.584 **********
2026-03-29 00:32:50.281141 | orchestrator | changed: [testbed-manager]
2026-03-29 00:32:50.281149 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:32:50.281153 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:32:50.281158 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:32:50.281162 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:32:50.281166 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:32:50.281170 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:32:50.281174 | orchestrator |
2026-03-29 00:32:50.281179 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-29 00:32:50.281183 | orchestrator | Sunday 29 March 2026 00:32:42 +0000 (0:00:08.296) 0:05:01.881 **********
2026-03-29 00:32:50.281187 | orchestrator | ok: [testbed-manager]
2026-03-29 00:32:50.281191 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:32:50.281195 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:32:50.281200 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:32:50.281204 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:32:50.281208 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:32:50.281212 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:32:50.281216 | orchestrator |
2026-03-29 00:32:50.281220 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-29 00:32:50.281228 | orchestrator | Sunday 29 March 2026 00:32:44 +0000 (0:00:01.833) 0:05:03.714 **********
2026-03-29 00:32:50.281233 | orchestrator | changed: [testbed-manager]
2026-03-29 00:32:50.281237 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:32:50.281241 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:32:50.281245 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:32:50.281249 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:32:50.281254 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:32:50.281258 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:32:50.281262 | orchestrator |
2026-03-29 00:32:50.281270 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-29 00:33:01.861842 | orchestrator | Sunday 29 March 2026 00:32:50 +0000 (0:00:06.262) 0:05:09.977 **********
2026-03-29 00:33:01.861959 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:33:01.861977 | orchestrator |
2026-03-29 00:33:01.861990 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-29 00:33:01.862003 | orchestrator | Sunday 29 March 2026 00:32:50 +0000 (0:00:00.408) 0:05:10.386 **********
2026-03-29 00:33:01.862059 | orchestrator | changed: [testbed-manager]
2026-03-29 00:33:01.862101 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:33:01.862113 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:33:01.862124 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:33:01.862135 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:33:01.862146 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:33:01.862157 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:33:01.862168 | orchestrator |
2026-03-29 00:33:01.862179 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-29 00:33:01.862190 | orchestrator | Sunday 29 March 2026 00:32:51 +0000 (0:00:00.682) 0:05:11.068 **********
2026-03-29 00:33:01.862201 | orchestrator | ok: [testbed-manager]
2026-03-29 00:33:01.862213 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:33:01.862224 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:33:01.862235 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:33:01.862246 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:33:01.862256 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:33:01.862267 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:33:01.862278 | orchestrator |
2026-03-29 00:33:01.862289 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-03-29 00:33:01.862300 | orchestrator | Sunday 29 March 2026 00:32:53 +0000 (0:00:02.021) 0:05:13.090 **********
2026-03-29 00:33:01.862311 | orchestrator | changed: [testbed-manager]
2026-03-29 00:33:01.862322 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:33:01.862332 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:33:01.862343 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:33:01.862354 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:33:01.862365 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:33:01.862377 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:33:01.862389 | orchestrator |
2026-03-29 00:33:01.862403 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-03-29 00:33:01.862416 | orchestrator | Sunday 29 March 2026 00:32:54 +0000 (0:00:00.813) 0:05:13.904 **********
2026-03-29 00:33:01.862429 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:33:01.862442 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:33:01.862455 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:33:01.862468 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:33:01.862480 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:33:01.862522 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:33:01.862535 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:33:01.862548 | orchestrator |
2026-03-29 00:33:01.862561 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-03-29 00:33:01.862589 | orchestrator | Sunday 29 March 2026 00:32:54 +0000 (0:00:00.259) 0:05:14.164 **********
2026-03-29 00:33:01.862629 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:33:01.862642 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:33:01.862655 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:33:01.862667 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:33:01.862680 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:33:01.862693 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:33:01.862705 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:33:01.862718 | orchestrator |
2026-03-29 00:33:01.862732 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-03-29 00:33:01.862743 | orchestrator | Sunday 29 March 2026 00:32:54 +0000 (0:00:00.373) 0:05:14.537 **********
2026-03-29 00:33:01.862754 | orchestrator | ok: [testbed-manager]
2026-03-29 00:33:01.862766 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:33:01.862776 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:33:01.862787 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:33:01.862798 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:33:01.862809 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:33:01.862820 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:33:01.862831 | orchestrator |
2026-03-29 00:33:01.862842 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-03-29 00:33:01.862853 | orchestrator | Sunday 29 March 2026 00:32:55 +0000 (0:00:00.422) 0:05:14.960 **********
2026-03-29 00:33:01.862864 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:33:01.862875 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:33:01.862887 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:33:01.862898 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:33:01.862909 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:33:01.862920 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:33:01.862931 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:33:01.862941 | orchestrator |
2026-03-29 00:33:01.862952 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-03-29 00:33:01.862964 | orchestrator | Sunday 29 March 2026 00:32:55 +0000 (0:00:00.265) 0:05:15.225 **********
2026-03-29 00:33:01.862975 | orchestrator | ok: [testbed-manager]
2026-03-29 00:33:01.862986 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:33:01.862997 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:33:01.863008 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:33:01.863019 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:33:01.863030 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:33:01.863041 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:33:01.863052 | orchestrator |
2026-03-29 00:33:01.863064 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-03-29 00:33:01.863075 | orchestrator | Sunday 29 March 2026 00:32:55 +0000 (0:00:00.296) 0:05:15.522 **********
2026-03-29 00:33:01.863086 | orchestrator | ok: [testbed-manager] =>
2026-03-29 00:33:01.863097 | orchestrator |   docker_version: 5:27.5.1
2026-03-29 00:33:01.863108 | orchestrator | ok: [testbed-node-0] =>
2026-03-29 00:33:01.863120 | orchestrator |   docker_version: 5:27.5.1
2026-03-29 00:33:01.863131 | orchestrator | ok: [testbed-node-1] =>
2026-03-29 00:33:01.863142 | orchestrator |   docker_version: 5:27.5.1
2026-03-29 00:33:01.863154 | orchestrator | ok: [testbed-node-2] =>
2026-03-29 00:33:01.863165 | orchestrator |   docker_version: 5:27.5.1
2026-03-29 00:33:01.863193 | orchestrator | ok: [testbed-node-3] =>
2026-03-29 00:33:01.863205 | orchestrator |   docker_version: 5:27.5.1
2026-03-29 00:33:01.863216 | orchestrator | ok: [testbed-node-4] =>
2026-03-29 00:33:01.863228 | orchestrator |   docker_version: 5:27.5.1
2026-03-29 00:33:01.863239 | orchestrator | ok: [testbed-node-5] =>
2026-03-29 00:33:01.863251 | orchestrator |   docker_version: 5:27.5.1
2026-03-29 00:33:01.863262 | orchestrator |
2026-03-29 00:33:01.863273 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-03-29 00:33:01.863285 | orchestrator | Sunday 29 March 2026 00:32:56 +0000 (0:00:00.296) 0:05:15.818 **********
2026-03-29 00:33:01.863296 | orchestrator | ok: [testbed-manager] =>
2026-03-29 00:33:01.863317 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-29 00:33:01.863328 | orchestrator | ok: [testbed-node-0] =>
2026-03-29 00:33:01.863339 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-29 00:33:01.863350 | orchestrator | ok: [testbed-node-1] =>
2026-03-29 00:33:01.863361 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-29 00:33:01.863372 | orchestrator | ok: [testbed-node-2] =>
2026-03-29 00:33:01.863383 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-29 00:33:01.863394 | orchestrator | ok: [testbed-node-3] =>
2026-03-29 00:33:01.863405 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-29 00:33:01.863416 | orchestrator | ok: [testbed-node-4] =>
2026-03-29 00:33:01.863427 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-29 00:33:01.863438 | orchestrator | ok: [testbed-node-5] =>
2026-03-29 00:33:01.863449 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-29 00:33:01.863460 | orchestrator |
2026-03-29 00:33:01.863471 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-03-29 00:33:01.863482 | orchestrator | Sunday 29 March 2026 00:32:56 +0000 (0:00:00.308) 0:05:16.127 **********
2026-03-29 00:33:01.863521 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:33:01.863534 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:33:01.863545 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:33:01.863556 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:33:01.863567 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:33:01.863578 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:33:01.863589 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:33:01.863601 | orchestrator |
2026-03-29 00:33:01.863612 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-03-29 00:33:01.863624 | orchestrator | Sunday 29 March 2026 00:32:56 +0000 (0:00:00.250) 0:05:16.378 **********
2026-03-29 00:33:01.863635 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:33:01.863646 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:33:01.863657 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:33:01.863668 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:33:01.863679 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:33:01.863690 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:33:01.863701 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:33:01.863712 | orchestrator |
2026-03-29 00:33:01.863723 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-03-29 00:33:01.863734 | orchestrator | Sunday 29 March 2026 00:32:56 +0000 (0:00:00.251) 0:05:16.629 **********
2026-03-29 00:33:01.863752 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:33:01.863766 | orchestrator |
2026-03-29 00:33:01.863778 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-03-29 00:33:01.863789 | orchestrator | Sunday 29 March 2026 00:32:57 +0000 (0:00:00.419) 0:05:17.049 **********
2026-03-29 00:33:01.863800 | orchestrator | ok: [testbed-manager]
2026-03-29 00:33:01.863812 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:33:01.863823 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:33:01.863834 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:33:01.863845 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:33:01.863856 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:33:01.863867 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:33:01.863878 | orchestrator |
2026-03-29 00:33:01.863889 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-03-29 00:33:01.863900 | orchestrator | Sunday 29 March 2026 00:32:58 +0000 (0:00:00.832) 0:05:17.881 **********
2026-03-29 00:33:01.863911 | orchestrator | ok: [testbed-manager]
2026-03-29 00:33:01.863922 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:33:01.863933 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:33:01.863944 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:33:01.863955 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:33:01.863973 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:33:01.863984 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:33:01.863996 | orchestrator |
2026-03-29 00:33:01.864007 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-03-29 00:33:01.864019 | orchestrator | Sunday 29 March 2026 00:33:01 +0000 (0:00:03.299) 0:05:21.181 **********
2026-03-29 00:33:01.864030 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-03-29 00:33:01.864042 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-03-29 00:33:01.864053 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-03-29 00:33:01.864064 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-03-29 00:33:01.864075 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-03-29 00:33:01.864086 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-03-29 00:33:01.864097 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:33:01.864109 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-03-29 00:33:01.864119 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-03-29 00:33:01.864130 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-03-29 00:33:01.864141 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:33:01.864152 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-03-29 00:33:01.864163 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-03-29 00:33:01.864173 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-03-29 00:33:01.864184 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:33:01.864195 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-03-29 00:33:01.864214 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-03-29 00:34:05.023105 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-03-29 00:34:05.023238 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:34:05.023266 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-03-29 00:34:05.023285 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-03-29 00:34:05.023302 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-03-29 00:34:05.023321 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:34:05.023339 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:34:05.023358 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-03-29 00:34:05.023377 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-03-29 00:34:05.023461 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-03-29 00:34:05.023481 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:34:05.023500 | orchestrator |
2026-03-29 00:34:05.023519 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-03-29 00:34:05.023540 | orchestrator | Sunday 29 March 2026 00:33:02 +0000 (0:00:00.596) 0:05:21.778 **********
2026-03-29 00:34:05.023559 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:05.023577 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:05.023595 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:05.023615 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:05.023632 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:05.023649 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:05.023668 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:05.023686 | orchestrator |
2026-03-29 00:34:05.023706 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-03-29 00:34:05.023723 | orchestrator | Sunday 29 March 2026 00:33:09 +0000 (0:00:07.479) 0:05:29.257 **********
2026-03-29 00:34:05.023744 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:05.023763 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:05.023780 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:05.023797 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:05.023816 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:05.023834 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:05.023892 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:05.023913 | orchestrator |
2026-03-29 00:34:05.023932 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-03-29 00:34:05.023950 | orchestrator | Sunday 29 March 2026 00:33:10 +0000 (0:00:01.061) 0:05:30.319 **********
2026-03-29 00:34:05.023969 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:05.023987 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:05.024006 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:05.024024 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:05.024042 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:05.024061 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:05.024079 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:05.024097 | orchestrator |
2026-03-29 00:34:05.024115 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-03-29 00:34:05.024134 | orchestrator | Sunday 29 March 2026 00:33:19 +0000 (0:00:08.754) 0:05:39.073 **********
2026-03-29 00:34:05.024153 | orchestrator | changed: [testbed-manager]
2026-03-29 00:34:05.024172 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:05.024209 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:05.024228 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:05.024246 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:05.024263 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:05.024282 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:05.024300 | orchestrator |
2026-03-29 00:34:05.024318 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-03-29 00:34:05.024338 | orchestrator | Sunday 29 March 2026 00:33:22 +0000 (0:00:03.466) 0:05:42.539 **********
2026-03-29 00:34:05.024358 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:05.024377 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:05.024426 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:05.024445 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:05.024462 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:05.024481 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:05.024499 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:05.024519 | orchestrator |
2026-03-29 00:34:05.024540 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-03-29 00:34:05.024560 | orchestrator | Sunday 29 March 2026 00:33:24 +0000 (0:00:01.378) 0:05:43.918 **********
2026-03-29 00:34:05.024580 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:05.024599 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:05.024620 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:05.024638 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:05.024657 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:05.024676 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:05.024694 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:05.024712 | orchestrator |
2026-03-29 00:34:05.024730 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-03-29 00:34:05.024749 | orchestrator | Sunday 29 March 2026 00:33:25 +0000 (0:00:01.462) 0:05:45.381 **********
2026-03-29 00:34:05.024765 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:34:05.024781 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:34:05.024799 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:34:05.024816 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:34:05.024834 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:34:05.024852 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:34:05.024869 | orchestrator | changed: [testbed-manager]
2026-03-29 00:34:05.024888 | orchestrator |
2026-03-29 00:34:05.024905 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-03-29 00:34:05.024923 | orchestrator | Sunday 29 March 2026 00:33:26 +0000 (0:00:00.623) 0:05:46.005 **********
2026-03-29 00:34:05.024941 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:05.024960 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:05.024979 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:05.025015 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:05.025034 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:05.025052 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:05.025070 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:05.025087 | orchestrator |
2026-03-29 00:34:05.025105 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-03-29 00:34:05.025156 | orchestrator | Sunday 29 March 2026 00:33:36 +0000 (0:00:10.300) 0:05:56.306 **********
2026-03-29 00:34:05.025178 | orchestrator | changed: [testbed-manager]
2026-03-29 00:34:05.025198 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:05.025216 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:05.025234 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:05.025252 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:05.025271 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:05.025288 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:05.025305 | orchestrator |
2026-03-29 00:34:05.025324 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-03-29 00:34:05.025342 | orchestrator | Sunday 29 March 2026 00:33:37 +0000 (0:00:01.159) 0:05:57.465 **********
2026-03-29 00:34:05.025359 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:05.025378 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:05.025472 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:05.025491 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:05.025510 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:05.025528 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:05.025546 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:05.025564 | orchestrator |
2026-03-29 00:34:05.025581 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-03-29 00:34:05.025593 | orchestrator | Sunday 29 March 2026 00:33:46 +0000 (0:00:09.179) 0:06:06.645 **********
2026-03-29 00:34:05.025603 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:05.025614 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:05.025624 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:05.025635 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:05.025645 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:05.025656 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:05.025666 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:05.025677 | orchestrator |
2026-03-29 00:34:05.025687 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-03-29 00:34:05.025698 | orchestrator | Sunday 29 March 2026 00:33:58 +0000 (0:00:11.502) 0:06:18.147 **********
2026-03-29 00:34:05.025709 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-03-29 00:34:05.025720 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-03-29 00:34:05.025730 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-03-29 00:34:05.025741 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-03-29 00:34:05.025751 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-03-29 00:34:05.025762 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-03-29 00:34:05.025772 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-03-29 00:34:05.025783 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-03-29 00:34:05.025793 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-03-29 00:34:05.025804 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-03-29 00:34:05.025814 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-03-29 00:34:05.025825 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-03-29 00:34:05.025835 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-03-29 00:34:05.025846 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-03-29 00:34:05.025856 | orchestrator |
2026-03-29 00:34:05.025868 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-03-29 00:34:05.025886 | orchestrator | Sunday 29 March 2026 00:33:59 +0000 (0:00:01.243) 0:06:19.390 **********
2026-03-29 00:34:05.025905 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:34:05.025936 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:34:05.025954 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:34:05.025971 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:34:05.025988 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:34:05.026005 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:34:05.026114 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:34:05.026135 | orchestrator |
2026-03-29 00:34:05.026154 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-03-29 00:34:05.026166 | orchestrator | Sunday 29 March 2026 00:34:00 +0000 (0:00:00.688) 0:06:20.079 **********
2026-03-29 00:34:05.026177 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:05.026188 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:05.026198 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:05.026209 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:05.026220 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:05.026230 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:05.026241 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:05.026251 | orchestrator |
2026-03-29 00:34:05.026262 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-03-29 00:34:05.026275 | orchestrator | Sunday 29 March 2026 00:34:04 +0000 (0:00:03.903) 0:06:23.983 **********
2026-03-29 00:34:05.026286 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:34:05.026296 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:34:05.026307 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:34:05.026318 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:34:05.026328 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:34:05.026338 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:34:05.026349 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:34:05.026359 | orchestrator |
2026-03-29 00:34:05.026591 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-03-29 00:34:05.026693 | orchestrator | Sunday 29 March 2026 00:34:04 +0000 (0:00:00.485) 0:06:24.469 **********
2026-03-29 00:34:05.026705 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-03-29 00:34:05.026712 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-03-29 00:34:05.026719 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:34:05.026726 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-03-29 00:34:05.026732 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-03-29 00:34:05.026739 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:34:05.026745 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-03-29 00:34:05.026751 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-03-29 00:34:05.026757 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:34:05.026780 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-03-29 00:34:24.579876 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-03-29 00:34:24.579949 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:34:24.579956 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-03-29 00:34:24.579962 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-03-29 00:34:24.579967 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:34:24.579972 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-03-29 00:34:24.579977 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-03-29 00:34:24.579982 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:34:24.579987 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-03-29 00:34:24.579991 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-03-29 00:34:24.579996 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:34:24.580000 | orchestrator |
2026-03-29 00:34:24.580006 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-03-29 00:34:24.580024 | orchestrator | Sunday 29 March 2026 00:34:05 +0000 (0:00:00.521) 0:06:24.990 **********
2026-03-29 00:34:24.580029 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:34:24.580033 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:34:24.580038 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:34:24.580042 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:34:24.580047 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:34:24.580051 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:34:24.580055 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:34:24.580060 | orchestrator |
2026-03-29 00:34:24.580065 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-03-29 00:34:24.580069 | orchestrator | Sunday 29 March 2026 00:34:05 +0000 (0:00:00.492) 0:06:25.483 **********
2026-03-29 00:34:24.580074 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:34:24.580078 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:34:24.580083 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:34:24.580087 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:34:24.580092 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:34:24.580096 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:34:24.580100 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:34:24.580105 | orchestrator |
2026-03-29 00:34:24.580110 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-29 00:34:24.580114 | orchestrator | Sunday 29 March 2026 00:34:06 +0000 (0:00:00.674) 0:06:26.157 **********
2026-03-29 00:34:24.580119 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:34:24.580123 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:34:24.580128 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:34:24.580132 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:34:24.580136 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:34:24.580141 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:34:24.580145 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:34:24.580150 | orchestrator |
2026-03-29 00:34:24.580154 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-29 00:34:24.580159 | orchestrator | Sunday 29 March 2026 00:34:06 +0000 (0:00:00.519) 0:06:26.677 **********
2026-03-29 00:34:24.580166 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:24.580171 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:34:24.580176 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:34:24.580180 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:34:24.580185 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:34:24.580189 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:34:24.580193 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:34:24.580198 | orchestrator |
2026-03-29 00:34:24.580202 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-29 00:34:24.580207 | orchestrator | Sunday 29 March 2026 00:34:08 +0000 (0:00:01.849) 0:06:28.526 **********
2026-03-29 00:34:24.580212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:34:24.580220 | orchestrator |
2026-03-29 00:34:24.580224 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-29 00:34:24.580229 | orchestrator | Sunday 29 March 2026 00:34:09 +0000 (0:00:00.870) 0:06:29.397 **********
2026-03-29 00:34:24.580233 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:24.580238 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:24.580242 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:24.580247 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:24.580251 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:24.580256 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:24.580261 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:24.580265 | orchestrator |
2026-03-29 00:34:24.580270 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-29 00:34:24.580278 | orchestrator | Sunday 29 March 2026 00:34:10 +0000 (0:00:01.136) 0:06:30.534 **********
2026-03-29 00:34:24.580282 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:24.580287 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:24.580291 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:24.580296 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:24.580300 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:24.580305 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:24.580309 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:24.580314 | orchestrator |
2026-03-29 00:34:24.580318 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-29 00:34:24.580322 | orchestrator | Sunday 29 March 2026 00:34:11 +0000 (0:00:00.847) 0:06:31.382 **********
2026-03-29 00:34:24.580327 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:24.580331 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:24.580336 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:24.580340 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:24.580345 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:24.580369 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:24.580373 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:24.580378 | orchestrator |
2026-03-29 00:34:24.580382 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-29 00:34:24.580397 | orchestrator | Sunday 29 March 2026 00:34:13 +0000 (0:00:01.368) 0:06:32.750 **********
2026-03-29 00:34:24.580402 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:34:24.580407 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:34:24.580411 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:34:24.580416 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:34:24.580420 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:34:24.580425 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:34:24.580429 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:34:24.580434 | orchestrator |
2026-03-29 00:34:24.580438 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-29 00:34:24.580443 | orchestrator | Sunday 29 March 2026 00:34:14 +0000 (0:00:01.426) 0:06:34.176 **********
2026-03-29 00:34:24.580448 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:24.580453 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:24.580458 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:24.580463 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:24.580469 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:24.580474 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:24.580479 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:24.580484 | orchestrator |
2026-03-29 
00:34:24.580490 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-03-29 00:34:24.580495 | orchestrator | Sunday 29 March 2026 00:34:15 +0000 (0:00:01.467) 0:06:35.644 ********** 2026-03-29 00:34:24.580500 | orchestrator | changed: [testbed-manager] 2026-03-29 00:34:24.580505 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:34:24.580510 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:34:24.580515 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:34:24.580520 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:34:24.580526 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:34:24.580531 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:34:24.580536 | orchestrator | 2026-03-29 00:34:24.580541 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-03-29 00:34:24.580546 | orchestrator | Sunday 29 March 2026 00:34:17 +0000 (0:00:01.371) 0:06:37.016 ********** 2026-03-29 00:34:24.580552 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:34:24.580557 | orchestrator | 2026-03-29 00:34:24.580562 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-03-29 00:34:24.580567 | orchestrator | Sunday 29 March 2026 00:34:18 +0000 (0:00:00.904) 0:06:37.920 ********** 2026-03-29 00:34:24.580579 | orchestrator | ok: [testbed-manager] 2026-03-29 00:34:24.580584 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:34:24.580589 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:34:24.580594 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:34:24.580600 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:34:24.580605 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:34:24.580610 | orchestrator | ok: 
[testbed-node-5] 2026-03-29 00:34:24.580615 | orchestrator | 2026-03-29 00:34:24.580621 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-29 00:34:24.580626 | orchestrator | Sunday 29 March 2026 00:34:19 +0000 (0:00:01.319) 0:06:39.239 ********** 2026-03-29 00:34:24.580631 | orchestrator | ok: [testbed-manager] 2026-03-29 00:34:24.580636 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:34:24.580642 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:34:24.580647 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:34:24.580652 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:34:24.580657 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:34:24.580662 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:34:24.580668 | orchestrator | 2026-03-29 00:34:24.580673 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-03-29 00:34:24.580678 | orchestrator | Sunday 29 March 2026 00:34:20 +0000 (0:00:01.363) 0:06:40.603 ********** 2026-03-29 00:34:24.580683 | orchestrator | ok: [testbed-manager] 2026-03-29 00:34:24.580689 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:34:24.580694 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:34:24.580699 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:34:24.580704 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:34:24.580709 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:34:24.580714 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:34:24.580720 | orchestrator | 2026-03-29 00:34:24.580725 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-03-29 00:34:24.580730 | orchestrator | Sunday 29 March 2026 00:34:22 +0000 (0:00:01.364) 0:06:41.968 ********** 2026-03-29 00:34:24.580735 | orchestrator | ok: [testbed-manager] 2026-03-29 00:34:24.580740 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:34:24.580745 | orchestrator | ok: [testbed-node-1] 2026-03-29 
00:34:24.580750 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:34:24.580755 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:34:24.580761 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:34:24.580766 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:34:24.580771 | orchestrator | 2026-03-29 00:34:24.580776 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-03-29 00:34:24.580781 | orchestrator | Sunday 29 March 2026 00:34:23 +0000 (0:00:01.114) 0:06:43.082 ********** 2026-03-29 00:34:24.580786 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:34:24.580791 | orchestrator | 2026-03-29 00:34:24.580797 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-29 00:34:24.580802 | orchestrator | Sunday 29 March 2026 00:34:24 +0000 (0:00:00.875) 0:06:43.958 ********** 2026-03-29 00:34:24.580807 | orchestrator | 2026-03-29 00:34:24.580812 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-29 00:34:24.580818 | orchestrator | Sunday 29 March 2026 00:34:24 +0000 (0:00:00.219) 0:06:44.177 ********** 2026-03-29 00:34:24.580823 | orchestrator | 2026-03-29 00:34:24.580828 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-29 00:34:24.580833 | orchestrator | Sunday 29 March 2026 00:34:24 +0000 (0:00:00.046) 0:06:44.224 ********** 2026-03-29 00:34:24.580838 | orchestrator | 2026-03-29 00:34:24.580843 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-29 00:34:24.580851 | orchestrator | Sunday 29 March 2026 00:34:24 +0000 (0:00:00.048) 0:06:44.273 ********** 2026-03-29 00:34:51.334905 | orchestrator | 
2026-03-29 00:34:51.335032 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-29 00:34:51.335082 | orchestrator | Sunday 29 March 2026 00:34:24 +0000 (0:00:00.049) 0:06:44.322 **********
2026-03-29 00:34:51.335100 | orchestrator | 
2026-03-29 00:34:51.335116 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-29 00:34:51.335130 | orchestrator | Sunday 29 March 2026 00:34:24 +0000 (0:00:00.040) 0:06:44.362 **********
2026-03-29 00:34:51.335144 | orchestrator | 
2026-03-29 00:34:51.335159 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-29 00:34:51.335174 | orchestrator | Sunday 29 March 2026 00:34:24 +0000 (0:00:00.051) 0:06:44.414 **********
2026-03-29 00:34:51.335189 | orchestrator | 
2026-03-29 00:34:51.335202 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-29 00:34:51.335217 | orchestrator | Sunday 29 March 2026 00:34:24 +0000 (0:00:00.049) 0:06:44.464 **********
2026-03-29 00:34:51.335227 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:34:51.335236 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:34:51.335245 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:34:51.335253 | orchestrator | 
2026-03-29 00:34:51.335262 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-29 00:34:51.335271 | orchestrator | Sunday 29 March 2026 00:34:26 +0000 (0:00:01.255) 0:06:45.719 **********
2026-03-29 00:34:51.335279 | orchestrator | changed: [testbed-manager]
2026-03-29 00:34:51.335337 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:51.335349 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:51.335358 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:51.335366 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:51.335375 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:51.335384 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:51.335392 | orchestrator | 
2026-03-29 00:34:51.335401 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-29 00:34:51.335410 | orchestrator | Sunday 29 March 2026 00:34:27 +0000 (0:00:01.320) 0:06:47.040 **********
2026-03-29 00:34:51.335418 | orchestrator | changed: [testbed-manager]
2026-03-29 00:34:51.335427 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:51.335436 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:51.335444 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:51.335452 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:51.335461 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:51.335469 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:51.335477 | orchestrator | 
2026-03-29 00:34:51.335486 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-29 00:34:51.335495 | orchestrator | Sunday 29 March 2026 00:34:28 +0000 (0:00:01.224) 0:06:48.265 **********
2026-03-29 00:34:51.335503 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:34:51.335512 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:51.335520 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:51.335529 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:51.335537 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:51.335546 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:51.335554 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:51.335562 | orchestrator | 
2026-03-29 00:34:51.335587 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-29 00:34:51.335595 | orchestrator | Sunday 29 March 2026 00:34:30 +0000 (0:00:02.353) 0:06:50.618 **********
2026-03-29 00:34:51.335604 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:34:51.335612 | orchestrator | 
2026-03-29 00:34:51.335621 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-29 00:34:51.335629 | orchestrator | Sunday 29 March 2026 00:34:31 +0000 (0:00:00.103) 0:06:50.722 **********
2026-03-29 00:34:51.335637 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:51.335646 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:51.335654 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:51.335663 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:51.335672 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:51.335689 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:51.335698 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:51.335706 | orchestrator | 
2026-03-29 00:34:51.335715 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-29 00:34:51.335724 | orchestrator | Sunday 29 March 2026 00:34:32 +0000 (0:00:01.327) 0:06:52.049 **********
2026-03-29 00:34:51.335732 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:34:51.335741 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:34:51.335749 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:34:51.335757 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:34:51.335766 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:34:51.335774 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:34:51.335782 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:34:51.335791 | orchestrator | 
2026-03-29 00:34:51.335799 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-29 00:34:51.335808 | orchestrator | Sunday 29 March 2026 00:34:32 +0000 (0:00:00.517) 0:06:52.566 **********
2026-03-29 00:34:51.335817 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:34:51.335829 | orchestrator | 
2026-03-29 00:34:51.335838 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-29 00:34:51.335847 | orchestrator | Sunday 29 March 2026 00:34:33 +0000 (0:00:00.849) 0:06:53.416 **********
2026-03-29 00:34:51.335855 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:51.335863 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:34:51.335872 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:34:51.335880 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:34:51.335889 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:34:51.335897 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:34:51.335906 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:34:51.335920 | orchestrator | 
2026-03-29 00:34:51.335934 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-29 00:34:51.335947 | orchestrator | Sunday 29 March 2026 00:34:34 +0000 (0:00:01.073) 0:06:54.490 **********
2026-03-29 00:34:51.335963 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-29 00:34:51.335998 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-29 00:34:51.336009 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-29 00:34:51.336017 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-29 00:34:51.336026 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-29 00:34:51.336034 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-29 00:34:51.336043 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-29 00:34:51.336056 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-03-29 00:34:51.336070 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-29 00:34:51.336084 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-29 00:34:51.336098 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-29 00:34:51.336114 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-29 00:34:51.336130 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-29 00:34:51.336145 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-29 00:34:51.336159 | orchestrator | 
2026-03-29 00:34:51.336174 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-29 00:34:51.336189 | orchestrator | Sunday 29 March 2026 00:34:37 +0000 (0:00:02.615) 0:06:57.106 **********
2026-03-29 00:34:51.336203 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:34:51.336216 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:34:51.336225 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:34:51.336233 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:34:51.336250 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:34:51.336258 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:34:51.336267 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:34:51.336275 | orchestrator | 
2026-03-29 00:34:51.336284 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-29 00:34:51.336368 | orchestrator | Sunday 29 March 2026 00:34:37 +0000 (0:00:00.512) 0:06:57.618 **********
2026-03-29 00:34:51.336381 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:34:51.336392 | orchestrator | 
2026-03-29 00:34:51.336401 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-29 00:34:51.336409 | orchestrator | Sunday 29 March 2026 00:34:38 +0000 (0:00:01.001) 0:06:58.619 **********
2026-03-29 00:34:51.336418 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:51.336426 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:34:51.336435 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:34:51.336443 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:34:51.336452 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:34:51.336460 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:34:51.336468 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:34:51.336477 | orchestrator | 
2026-03-29 00:34:51.336492 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-29 00:34:51.336501 | orchestrator | Sunday 29 March 2026 00:34:39 +0000 (0:00:00.854) 0:06:59.474 **********
2026-03-29 00:34:51.336510 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:51.336518 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:34:51.336527 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:34:51.336535 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:34:51.336544 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:34:51.336552 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:34:51.336560 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:34:51.336569 | orchestrator | 
2026-03-29 00:34:51.336577 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-29 00:34:51.336586 | orchestrator | Sunday 29 March 2026 00:34:40 +0000 (0:00:00.815) 0:07:00.289 **********
2026-03-29 00:34:51.336595 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:34:51.336603 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:34:51.336612 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:34:51.336620 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:34:51.336629 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:34:51.336637 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:34:51.336646 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:34:51.336654 | orchestrator | 
2026-03-29 00:34:51.336663 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-03-29 00:34:51.336671 | orchestrator | Sunday 29 March 2026 00:34:41 +0000 (0:00:00.537) 0:07:00.827 **********
2026-03-29 00:34:51.336680 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:51.336688 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:34:51.336697 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:34:51.336705 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:34:51.336714 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:34:51.336722 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:34:51.336730 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:34:51.336743 | orchestrator | 
2026-03-29 00:34:51.336758 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-29 00:34:51.336772 | orchestrator | Sunday 29 March 2026 00:34:42 +0000 (0:00:01.569) 0:07:02.396 **********
2026-03-29 00:34:51.336787 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:34:51.336802 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:34:51.336817 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:34:51.336832 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:34:51.336847 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:34:51.336873 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:34:51.336888 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:34:51.336902 | orchestrator | 
2026-03-29 00:34:51.336917 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-29 00:34:51.336932 | orchestrator | Sunday 29 March 2026 00:34:43 +0000 (0:00:00.661) 0:07:03.058 **********
2026-03-29 00:34:51.336947 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:51.336962 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:51.336977 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:51.336992 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:51.337007 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:51.337020 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:51.337045 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:35:24.415707 | orchestrator | 
2026-03-29 00:35:24.415879 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-29 00:35:24.415899 | orchestrator | Sunday 29 March 2026 00:34:51 +0000 (0:00:08.058) 0:07:11.116 **********
2026-03-29 00:35:24.415954 | orchestrator | ok: [testbed-manager]
2026-03-29 00:35:24.415970 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:35:24.415983 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:35:24.415995 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:35:24.416006 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:35:24.416017 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:35:24.416028 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:35:24.416039 | orchestrator | 
2026-03-29 00:35:24.416051 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-29 00:35:24.416062 | orchestrator | Sunday 29 March 2026 00:34:52 +0000 (0:00:01.341) 0:07:12.458 **********
2026-03-29 00:35:24.416073 | orchestrator | ok: [testbed-manager]
2026-03-29 00:35:24.416084 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:35:24.416095 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:35:24.416106 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:35:24.416134 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:35:24.416157 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:35:24.416170 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:35:24.416183 | orchestrator | 
2026-03-29 00:35:24.416196 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-29 00:35:24.416209 | orchestrator | Sunday 29 March 2026 00:34:54 +0000 (0:00:01.797) 0:07:14.255 **********
2026-03-29 00:35:24.416283 | orchestrator | ok: [testbed-manager]
2026-03-29 00:35:24.416298 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:35:24.416311 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:35:24.416323 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:35:24.416335 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:35:24.416347 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:35:24.416360 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:35:24.416371 | orchestrator | 
2026-03-29 00:35:24.416384 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-29 00:35:24.416396 | orchestrator | Sunday 29 March 2026 00:34:56 +0000 (0:00:01.842) 0:07:16.098 **********
2026-03-29 00:35:24.416408 | orchestrator | ok: [testbed-manager]
2026-03-29 00:35:24.416421 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:35:24.416433 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:35:24.416445 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:35:24.416458 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:35:24.416470 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:35:24.416482 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:35:24.416495 | orchestrator | 
2026-03-29 00:35:24.416516 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-29 00:35:24.416537 | orchestrator | Sunday 29 March 2026 00:34:57 +0000 (0:00:00.858) 0:07:16.957 **********
2026-03-29 00:35:24.416557 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:35:24.416576 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:35:24.416598 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:35:24.416660 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:35:24.416672 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:35:24.416683 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:35:24.416694 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:35:24.416705 | orchestrator | 
2026-03-29 00:35:24.416716 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-29 00:35:24.416727 | orchestrator | Sunday 29 March 2026 00:34:58 +0000 (0:00:00.789) 0:07:17.746 **********
2026-03-29 00:35:24.416738 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:35:24.416749 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:35:24.416761 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:35:24.416772 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:35:24.416782 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:35:24.416793 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:35:24.416804 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:35:24.416814 | orchestrator | 
2026-03-29 00:35:24.416825 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-29 00:35:24.416836 | orchestrator | Sunday 29 March 2026 00:34:58 +0000 (0:00:00.665) 0:07:18.411 **********
2026-03-29 00:35:24.416846 | orchestrator | ok: [testbed-manager]
2026-03-29 00:35:24.416857 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:35:24.416868 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:35:24.416878 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:35:24.416889 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:35:24.416899 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:35:24.416910 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:35:24.416920 | orchestrator | 
2026-03-29 00:35:24.416931 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-29 00:35:24.416942 | orchestrator | Sunday 29 March 2026 00:34:59 +0000 (0:00:00.502) 0:07:18.914 **********
2026-03-29 00:35:24.416953 | orchestrator | ok: [testbed-manager]
2026-03-29 00:35:24.416963 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:35:24.416974 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:35:24.416984 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:35:24.416995 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:35:24.417005 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:35:24.417016 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:35:24.417026 | orchestrator | 
2026-03-29 00:35:24.417037 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-29 00:35:24.417048 | orchestrator | Sunday 29 March 2026 00:34:59 +0000 (0:00:00.492) 0:07:19.407 **********
2026-03-29 00:35:24.417059 | orchestrator | ok: [testbed-manager]
2026-03-29 00:35:24.417069 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:35:24.417080 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:35:24.417091 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:35:24.417101 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:35:24.417111 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:35:24.417122 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:35:24.417132 | orchestrator | 
2026-03-29 00:35:24.417143 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-29 00:35:24.417154 | orchestrator | Sunday 29 March 2026 00:35:00 +0000 (0:00:00.479) 0:07:19.887 **********
2026-03-29 00:35:24.417164 | orchestrator | ok: [testbed-manager]
2026-03-29 00:35:24.417175 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:35:24.417185 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:35:24.417196 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:35:24.417206 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:35:24.417217 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:35:24.417279 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:35:24.417291 | orchestrator |
2026-03-29 00:35:24.417326 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-29 00:35:24.417338 | orchestrator | Sunday 29 March 2026 00:35:05 +0000 (0:00:05.022) 0:07:24.909 **********
2026-03-29 00:35:24.417349 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:35:24.417360 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:35:24.417381 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:35:24.417391 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:35:24.417402 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:35:24.417413 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:35:24.417423 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:35:24.417434 | orchestrator |
2026-03-29 00:35:24.417445 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-29 00:35:24.417456 | orchestrator | Sunday 29 March 2026 00:35:05 +0000 (0:00:00.719) 0:07:25.628 **********
2026-03-29 00:35:24.417484 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:35:24.417502 | orchestrator |
2026-03-29 00:35:24.417535 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-29 00:35:24.417553 | orchestrator | Sunday 29 March 2026 00:35:06 +0000 (0:00:00.833) 0:07:26.461 **********
2026-03-29 00:35:24.417571 | orchestrator | ok: [testbed-manager]
2026-03-29 00:35:24.417590 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:35:24.417608 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:35:24.417626 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:35:24.417645 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:35:24.417658 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:35:24.417669 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:35:24.417679 | orchestrator |
2026-03-29 00:35:24.417690 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-29 00:35:24.417701 | orchestrator | Sunday 29 March 2026 00:35:08 +0000 (0:00:02.108) 0:07:28.570 **********
2026-03-29 00:35:24.417712 | orchestrator | ok: [testbed-manager]
2026-03-29 00:35:24.417722 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:35:24.417733 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:35:24.417743 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:35:24.417753 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:35:24.417764 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:35:24.417774 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:35:24.417785 | orchestrator |
2026-03-29 00:35:24.417795 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-29 00:35:24.417806 | orchestrator | Sunday 29 March 2026 00:35:10 +0000 (0:00:00.950) 0:07:29.928 **********
2026-03-29 00:35:24.417817 | orchestrator | ok: [testbed-manager]
2026-03-29 00:35:24.417828 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:35:24.417838 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:35:24.417849 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:35:24.417859 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:35:24.417870 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:35:24.417880 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:35:24.417891 | orchestrator |
2026-03-29 00:35:24.417901 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-29 00:35:24.417919 | orchestrator | Sunday 29 March 2026 00:35:11 +0000 (0:00:00.950) 0:07:30.879 **********
2026-03-29 00:35:24.417931 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-29 00:35:24.417944 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-29 00:35:24.417955 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-29 00:35:24.417966 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-29 00:35:24.417976 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-29 00:35:24.417987 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-29 00:35:24.418007 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-29 00:35:24.418093 | orchestrator |
2026-03-29 00:35:24.418108 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-29 00:35:24.418119 | orchestrator | Sunday 29 March 2026 00:35:12 +0000 (0:00:01.779) 0:07:32.659 **********
2026-03-29 00:35:24.418130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:35:24.418141 | orchestrator |
2026-03-29 00:35:24.418152 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-29 00:35:24.418163 | orchestrator | Sunday 29 March 2026 00:35:13 +0000 (0:00:00.928) 0:07:33.587 **********
2026-03-29 00:35:24.418174 | orchestrator | changed: [testbed-manager]
2026-03-29 00:35:24.418184 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:35:24.418195 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:35:24.418206 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:35:24.418216 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:35:24.418248 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:35:24.418259 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:35:24.418269 | orchestrator |
2026-03-29 00:35:24.418291 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-03-29 00:35:55.074671 | orchestrator | Sunday 29 March 2026 00:35:24 +0000 (0:00:10.524) 0:07:44.112 **********
2026-03-29 00:35:55.074765 | orchestrator | ok: [testbed-manager]
2026-03-29 00:35:55.074775 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:35:55.074782 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:35:55.074788 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:35:55.074794 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:35:55.074800 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:35:55.074806 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:35:55.074812 | orchestrator |
2026-03-29 00:35:55.074819 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-03-29 00:35:55.074825 | orchestrator | Sunday 29 March 2026 00:35:26 +0000 (0:00:01.731) 0:07:45.843 **********
2026-03-29 00:35:55.074831 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:35:55.074837 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:35:55.074842 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:35:55.074848 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:35:55.074854 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:35:55.074860 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:35:55.074866 | orchestrator |
2026-03-29 00:35:55.074871 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-03-29 00:35:55.074877 | orchestrator | Sunday 29 March 2026 00:35:27 +0000 (0:00:01.498) 0:07:47.342 **********
2026-03-29 00:35:55.074883 | orchestrator | changed: [testbed-manager]
2026-03-29 00:35:55.074890 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:35:55.074896 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:35:55.074901 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:35:55.074907 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:35:55.074913 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:35:55.074918 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:35:55.074924 | orchestrator |
2026-03-29 00:35:55.074930 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-03-29 00:35:55.074936 | orchestrator |
2026-03-29 00:35:55.074942 | orchestrator | TASK [Include hardening role] **************************************************
2026-03-29 00:35:55.074947 | orchestrator | Sunday 29 March 2026 00:35:28 +0000 (0:00:01.239) 0:07:48.581 **********
2026-03-29 00:35:55.074953 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:35:55.074959 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:35:55.074986 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:35:55.074993 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:35:55.074998 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:35:55.075004 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:35:55.075010 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:35:55.075015 | orchestrator |
2026-03-29 00:35:55.075021 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-03-29 00:35:55.075027 | orchestrator |
2026-03-29 00:35:55.075032 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-03-29 00:35:55.075038 | orchestrator | Sunday 29 March 2026 00:35:29 +0000 (0:00:00.488) 0:07:49.070 **********
2026-03-29 00:35:55.075044 | orchestrator | changed: [testbed-manager]
2026-03-29 00:35:55.075050 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:35:55.075055 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:35:55.075061 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:35:55.075067 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:35:55.075073 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:35:55.075089 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:35:55.075095 | orchestrator |
2026-03-29 00:35:55.075101 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-03-29 00:35:55.075107 | orchestrator | Sunday 29 March 2026 00:35:30 +0000 (0:00:01.348) 0:07:50.418 **********
2026-03-29 00:35:55.075112 | orchestrator | ok: [testbed-manager]
2026-03-29 00:35:55.075118 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:35:55.075124 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:35:55.075129 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:35:55.075135 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:35:55.075141 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:35:55.075146 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:35:55.075167 | orchestrator |
2026-03-29 00:35:55.075173 | orchestrator | TASK [Include auditd role] *****************************************************
2026-03-29 00:35:55.075179 | orchestrator | Sunday 29 March 2026 00:35:32 +0000 (0:00:00.515) 0:07:52.023 **********
2026-03-29 00:35:55.075185 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:35:55.075191 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:35:55.075196 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:35:55.075202 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:35:55.075208 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:35:55.075213 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:35:55.075219 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:35:55.075224 | orchestrator |
2026-03-29 00:35:55.075230 | orchestrator | TASK [Include smartd role] *****************************************************
2026-03-29 00:35:55.075236 | orchestrator | Sunday 29 March 2026 00:35:32 +0000 (0:00:00.515) 0:07:52.538 **********
2026-03-29 00:35:55.075242 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:35:55.075249 | orchestrator |
2026-03-29 00:35:55.075255 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-03-29 00:35:55.075261 | orchestrator | Sunday 29 March 2026 00:35:33 +0000 (0:00:00.783) 0:07:53.322 **********
2026-03-29 00:35:55.075269 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:35:55.075277 | orchestrator |
2026-03-29 00:35:55.075283 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-03-29 00:35:55.075289 | orchestrator | Sunday 29 March 2026 00:35:34 +0000 (0:00:00.915) 0:07:54.238 **********
2026-03-29 00:35:55.075294 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:35:55.075300 | orchestrator | changed: [testbed-manager]
2026-03-29 00:35:55.075306 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:35:55.075311 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:35:55.075317 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:35:55.075327 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:35:55.075332 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:35:55.075338 | orchestrator |
2026-03-29 00:35:55.075356 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-03-29 00:35:55.075362 | orchestrator | Sunday 29 March 2026 00:35:43 +0000 (0:00:08.805) 0:08:03.043 **********
2026-03-29 00:35:55.075368 | orchestrator | changed: [testbed-manager]
2026-03-29 00:35:55.075373 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:35:55.075379 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:35:55.075385 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:35:55.075390 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:35:55.075396 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:35:55.075401 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:35:55.075407 | orchestrator |
2026-03-29 00:35:55.075413 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-03-29 00:35:55.075419 | orchestrator | Sunday 29 March 2026 00:35:44 +0000 (0:00:00.880) 0:08:03.924 **********
2026-03-29 00:35:55.075425 | orchestrator | changed: [testbed-manager]
2026-03-29 00:35:55.075430 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:35:55.075436 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:35:55.075441 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:35:55.075447 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:35:55.075453 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:35:55.075458 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:35:55.075464 | orchestrator |
2026-03-29 00:35:55.075470 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-03-29 00:35:55.075475 | orchestrator | Sunday 29 March 2026 00:35:45 +0000 (0:00:01.436) 0:08:05.360 **********
2026-03-29 00:35:55.075481 | orchestrator | changed: [testbed-manager]
2026-03-29 00:35:55.075487 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:35:55.075492 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:35:55.075498 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:35:55.075503 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:35:55.075509 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:35:55.075515 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:35:55.075520 | orchestrator |
2026-03-29 00:35:55.075526 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-03-29 00:35:55.075532 | orchestrator | Sunday 29 March 2026 00:35:47 +0000 (0:00:02.208) 0:08:07.568 **********
2026-03-29 00:35:55.075537 | orchestrator | changed: [testbed-manager]
2026-03-29 00:35:55.075543 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:35:55.075548 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:35:55.075554 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:35:55.075560 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:35:55.075565 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:35:55.075571 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:35:55.075577 | orchestrator |
2026-03-29 00:35:55.075582 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-03-29 00:35:55.075588 | orchestrator | Sunday 29 March 2026 00:35:49 +0000 (0:00:01.289) 0:08:08.858 **********
2026-03-29 00:35:55.075594 | orchestrator | changed: [testbed-manager]
2026-03-29 00:35:55.075599 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:35:55.075605 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:35:55.075611 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:35:55.075616 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:35:55.075622 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:35:55.075631 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:35:55.075637 | orchestrator |
2026-03-29 00:35:55.075643 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-03-29 00:35:55.075648 | orchestrator |
2026-03-29 00:35:55.075654 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-03-29 00:35:55.075660 | orchestrator | Sunday 29 March 2026 00:35:50 +0000 (0:00:01.122) 0:08:09.981 **********
2026-03-29 00:35:55.075670 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:35:55.075676 | orchestrator |
2026-03-29 00:35:55.075682 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-29 00:35:55.075688 | orchestrator | Sunday 29 March 2026 00:35:51 +0000 (0:00:00.968) 0:08:10.949 **********
2026-03-29 00:35:55.075693 | orchestrator | ok: [testbed-manager]
2026-03-29 00:35:55.075699 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:35:55.075705 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:35:55.075710 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:35:55.075716 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:35:55.075722 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:35:55.075727 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:35:55.075733 | orchestrator |
2026-03-29 00:35:55.075739 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-29 00:35:55.075744 | orchestrator | Sunday 29 March 2026 00:35:52 +0000 (0:00:00.840) 0:08:11.790 **********
2026-03-29 00:35:55.075750 | orchestrator | changed: [testbed-manager]
2026-03-29 00:35:55.075756 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:35:55.075762 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:35:55.075767 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:35:55.075773 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:35:55.075779 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:35:55.075784 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:35:55.075790 | orchestrator |
2026-03-29 00:35:55.075795 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-03-29 00:35:55.075801 | orchestrator | Sunday 29 March 2026 00:35:53 +0000 (0:00:01.259) 0:08:13.049 **********
2026-03-29 00:35:55.075807 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:35:55.075813 | orchestrator |
2026-03-29 00:35:55.075819 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-29 00:35:55.075824 | orchestrator | Sunday 29 March 2026 00:35:54 +0000 (0:00:00.810) 0:08:13.859 **********
2026-03-29 00:35:55.075830 | orchestrator | ok: [testbed-manager]
2026-03-29 00:35:55.075836 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:35:55.075841 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:35:55.075847 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:35:55.075853 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:35:55.075858 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:35:55.075864 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:35:55.075870 | orchestrator |
2026-03-29 00:35:55.075879 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-29 00:35:56.833970 | orchestrator | Sunday 29 March 2026 00:35:55 +0000 (0:00:00.908) 0:08:14.768 **********
2026-03-29 00:35:56.834222 | orchestrator | changed: [testbed-manager]
2026-03-29 00:35:56.834245 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:35:56.834258 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:35:56.834270 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:35:56.834281 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:35:56.834292 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:35:56.834303 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:35:56.834313 | orchestrator |
2026-03-29 00:35:56.834325 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:35:56.834338 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-29 00:35:56.834350 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-29 00:35:56.834361 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-29 00:35:56.834402 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-29 00:35:56.834420 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-29 00:35:56.834436 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-29 00:35:56.834447 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-29 00:35:56.834457 | orchestrator |
2026-03-29 00:35:56.834468 | orchestrator |
2026-03-29 00:35:56.834487 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:35:56.834505 | orchestrator | Sunday 29 March 2026 00:35:56 +0000 (0:00:01.345) 0:08:16.114 **********
2026-03-29 00:35:56.834523 | orchestrator | ===============================================================================
2026-03-29 00:35:56.834540 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.90s
2026-03-29 00:35:56.834559 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.03s
2026-03-29 00:35:56.834578 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.70s
2026-03-29 00:35:56.834615 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.55s
2026-03-29 00:35:56.834635 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.50s
2026-03-29 00:35:56.834656 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.01s
2026-03-29 00:35:56.834675 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.86s
2026-03-29 00:35:56.834692 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.52s
2026-03-29 00:35:56.834704 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.30s
2026-03-29 00:35:56.834714 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.31s
2026-03-29 00:35:56.834725 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.18s
2026-03-29 00:35:56.834736 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.81s
2026-03-29 00:35:56.834746 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.75s
2026-03-29 00:35:56.834758 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.71s
2026-03-29 00:35:56.834768 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.30s
2026-03-29 00:35:56.834779 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.06s
2026-03-29 00:35:56.834790 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.48s
2026-03-29 00:35:56.834800 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.26s
2026-03-29 00:35:56.834811 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.93s
2026-03-29 00:35:56.834822 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.16s
2026-03-29 00:35:57.006803 | orchestrator | + osism apply fail2ban
2026-03-29 00:36:08.636685 | orchestrator | 2026-03-29 00:36:08 | INFO  | Prepare task for execution of fail2ban.
2026-03-29 00:36:08.730998 | orchestrator | 2026-03-29 00:36:08 | INFO  | Task 9042bfd3-37c4-44f4-bedd-b2e0b4b55a7d (fail2ban) was prepared for execution.
2026-03-29 00:36:08.731102 | orchestrator | 2026-03-29 00:36:08 | INFO  | It takes a moment until task 9042bfd3-37c4-44f4-bedd-b2e0b4b55a7d (fail2ban) has been started and output is visible here.
2026-03-29 00:36:29.333341 | orchestrator |
2026-03-29 00:36:29.333421 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-03-29 00:36:29.333445 | orchestrator |
2026-03-29 00:36:29.333450 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-03-29 00:36:29.333455 | orchestrator | Sunday 29 March 2026 00:36:12 +0000 (0:00:00.247) 0:00:00.247 **********
2026-03-29 00:36:29.333461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:36:29.333467 | orchestrator |
2026-03-29 00:36:29.333471 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-03-29 00:36:29.333475 | orchestrator | Sunday 29 March 2026 00:36:13 +0000 (0:00:00.984) 0:00:01.232 **********
2026-03-29 00:36:29.333479 | orchestrator | changed: [testbed-manager]
2026-03-29 00:36:29.333485 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:36:29.333489 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:36:29.333493 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:36:29.333497 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:36:29.333501 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:36:29.333505 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:36:29.333509 | orchestrator |
2026-03-29 00:36:29.333513 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-03-29 00:36:29.333517 | orchestrator | Sunday 29 March 2026 00:36:24 +0000 (0:00:11.418) 0:00:12.651 **********
2026-03-29 00:36:29.333521 | orchestrator | changed: [testbed-manager]
2026-03-29 00:36:29.333525 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:36:29.333529 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:36:29.333532 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:36:29.333536 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:36:29.333540 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:36:29.333544 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:36:29.333548 | orchestrator |
2026-03-29 00:36:29.333553 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-29 00:36:29.333557 | orchestrator | Sunday 29 March 2026 00:36:26 +0000 (0:00:01.561) 0:00:14.212 **********
2026-03-29 00:36:29.333561 | orchestrator | ok: [testbed-manager]
2026-03-29 00:36:29.333566 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:36:29.333570 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:36:29.333574 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:36:29.333578 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:36:29.333582 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:36:29.333586 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:36:29.333589 | orchestrator |
2026-03-29 00:36:29.333594 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-29 00:36:29.333598 | orchestrator | Sunday 29 March 2026 00:36:27 +0000 (0:00:01.322) 0:00:15.535 **********
2026-03-29 00:36:29.333602 | orchestrator | changed: [testbed-manager]
2026-03-29 00:36:29.333606 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:36:29.333610 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:36:29.333614 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:36:29.333618 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:36:29.333622 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:36:29.333626 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:36:29.333630 | orchestrator |
2026-03-29 00:36:29.333634 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:36:29.333649 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:36:29.333655 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:36:29.333659 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:36:29.333663 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:36:29.333716 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:36:29.333720 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:36:29.333724 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:36:29.333728 | orchestrator |
2026-03-29 00:36:29.333732 | orchestrator |
2026-03-29 00:36:29.333736 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:36:29.333740 | orchestrator | Sunday 29 March 2026 00:36:29 +0000 (0:00:01.524) 0:00:17.059 **********
2026-03-29 00:36:29.333744 | orchestrator | ===============================================================================
2026-03-29 00:36:29.333748 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.42s
2026-03-29 00:36:29.333752 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.56s
2026-03-29 00:36:29.333756 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.52s
2026-03-29 00:36:29.333760 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.32s
2026-03-29 00:36:29.333764 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 0.98s
2026-03-29 00:36:29.458676 | orchestrator | + osism apply network
2026-03-29 00:36:40.636293 | orchestrator | 2026-03-29 00:36:40 | INFO  | Prepare task for execution of network.
2026-03-29 00:36:40.712743 | orchestrator | 2026-03-29 00:36:40 | INFO  | Task a790d480-9af6-41c5-9afb-b62dc8052b70 (network) was prepared for execution.
2026-03-29 00:36:40.712837 | orchestrator | 2026-03-29 00:36:40 | INFO  | It takes a moment until task a790d480-9af6-41c5-9afb-b62dc8052b70 (network) has been started and output is visible here.
2026-03-29 00:37:08.244070 | orchestrator |
2026-03-29 00:37:08.244181 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-29 00:37:08.244199 | orchestrator |
2026-03-29 00:37:08.244211 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-29 00:37:08.244223 | orchestrator | Sunday 29 March 2026 00:36:44 +0000 (0:00:00.332) 0:00:00.332 **********
2026-03-29 00:37:08.244235 | orchestrator | ok: [testbed-manager]
2026-03-29 00:37:08.244247 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:37:08.244258 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:37:08.244269 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:37:08.244280 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:37:08.244291 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:37:08.244302 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:37:08.244312 | orchestrator |
2026-03-29 00:37:08.244323 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-29 00:37:08.244334 | orchestrator | Sunday 29 March 2026 00:36:44 +0000 (0:00:00.598) 0:00:00.931 **********
2026-03-29 00:37:08.244347 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:37:08.244361 | orchestrator |
2026-03-29 00:37:08.244373 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-29 00:37:08.244383 | orchestrator | Sunday 29 March 2026 00:36:45 +0000 (0:00:01.138) 0:00:02.069 **********
2026-03-29 00:37:08.244394 | orchestrator | ok: [testbed-manager]
2026-03-29 00:37:08.244405 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:37:08.244415 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:37:08.244426 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:37:08.244437 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:37:08.244447 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:37:08.244487 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:37:08.244498 | orchestrator |
2026-03-29 00:37:08.244509 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-29 00:37:08.244520 | orchestrator | Sunday 29 March 2026 00:36:48 +0000 (0:00:02.653) 0:00:04.722 **********
2026-03-29 00:37:08.244531 | orchestrator | ok: [testbed-manager]
2026-03-29 00:37:08.244542 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:37:08.244552 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:37:08.244563 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:37:08.244573 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:37:08.244584 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:37:08.244595 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:37:08.244613 | orchestrator |
2026-03-29 00:37:08.244634 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-29 00:37:08.244654 | orchestrator | Sunday 29 March 2026 00:36:50 +0000 (0:00:01.612) 0:00:06.335 **********
2026-03-29 00:37:08.244674 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-29 00:37:08.244693 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-29 00:37:08.244713 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-29 00:37:08.244731 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-29 00:37:08.244752 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-29 00:37:08.244771 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-29 00:37:08.244791 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-29 00:37:08.244811 | orchestrator |
2026-03-29 00:37:08.244832 | orchestrator | TASK [osism.commons.network : Write network_netplan_config_template to temporary file] ***
2026-03-29 00:37:08.244853 | orchestrator | Sunday 29 March 2026 00:36:51 +0000 (0:00:01.069) 0:00:07.405 **********
2026-03-29 00:37:08.244868 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:37:08.244881 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:37:08.244894 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:37:08.244906 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:37:08.244919 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:37:08.244931 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:37:08.244944 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:37:08.244957 | orchestrator |
2026-03-29 00:37:08.244968 | orchestrator | TASK [osism.commons.network : Render netplan configuration from network_netplan_config_template variable] ***
2026-03-29 00:37:08.244980 | orchestrator | Sunday 29 March 2026 00:36:51 +0000 (0:00:00.588) 0:00:07.993 **********
2026-03-29 00:37:08.244990 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:37:08.245001 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:37:08.245012 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:37:08.245046 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:37:08.245057 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:37:08.245067 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:37:08.245078 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:37:08.245089 | orchestrator |
2026-03-29 00:37:08.245120 | orchestrator | TASK [osism.commons.network : Remove temporary network_netplan_config_template file] ***
2026-03-29 00:37:08.245131 | orchestrator | Sunday 29 March 2026 00:36:52 +0000 (0:00:00.662) 0:00:08.656 **********
2026-03-29 00:37:08.245142 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:37:08.245153 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:37:08.245163 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:37:08.245174 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:37:08.245185 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:37:08.245195 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:37:08.245206 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:37:08.245217 | orchestrator |
2026-03-29 00:37:08.245228 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-29 00:37:08.245239 | orchestrator | Sunday 29 March 2026 00:36:52 +0000 (0:00:00.635) 0:00:09.291 **********
2026-03-29 00:37:08.245249 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-29 00:37:08.245272 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-29 00:37:08.245283 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-29 00:37:08.245294 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 00:37:08.245305 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-29 00:37:08.245315 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-29 00:37:08.245326 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-29 00:37:08.245337 | orchestrator |
2026-03-29 00:37:08.245366 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-29 00:37:08.245378 | orchestrator | Sunday 29 March 2026 00:36:55 +0000 (0:00:02.921) 0:00:12.213 **********
2026-03-29 00:37:08.245389 | orchestrator | changed: [testbed-manager]
2026-03-29 00:37:08.245400 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:37:08.245411 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:37:08.245422 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:37:08.245432 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:37:08.245443 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:37:08.245454 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:37:08.245464 | orchestrator |
2026-03-29 00:37:08.245475 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-29 00:37:08.245486 | orchestrator | Sunday 29 March 2026 00:36:57 +0000 (0:00:01.542) 0:00:13.756 **********
2026-03-29 00:37:08.245497 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-29 00:37:08.245508 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 00:37:08.245518 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-29 00:37:08.245529 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-29 00:37:08.245540 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-29 00:37:08.245550 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-29 00:37:08.245561 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-29 00:37:08.245572 | orchestrator |
2026-03-29 00:37:08.245583 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-03-29 00:37:08.245593 | orchestrator | Sunday 29 March 2026 00:36:59 +0000 (0:00:01.582) 0:00:15.338 **********
2026-03-29 00:37:08.245604 | orchestrator | ok: [testbed-manager]
2026-03-29 00:37:08.245615 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:37:08.245626 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:37:08.245636 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:37:08.245647 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:37:08.245658 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:37:08.245669 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:37:08.245679 | orchestrator |
2026-03-29 00:37:08.245690 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-03-29 00:37:08.245701 | orchestrator | Sunday 29 March 2026 00:37:00 +0000 (0:00:01.017) 0:00:16.355 **********
2026-03-29 00:37:08.245712 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:37:08.245723 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:37:08.245734 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:37:08.245745 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:37:08.245755 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:37:08.245766 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:37:08.245777 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:37:08.245787 | orchestrator |
2026-03-29 00:37:08.245798 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-03-29 00:37:08.245809 | orchestrator | Sunday 29 March 2026 00:37:00 +0000 (0:00:00.547) 0:00:16.903 **********
2026-03-29 00:37:08.245820 | orchestrator | ok: [testbed-manager]
2026-03-29 00:37:08.245830 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:37:08.245841 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:37:08.245852 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:37:08.245863 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:37:08.245874 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:37:08.245884 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:37:08.245895 | orchestrator |
2026-03-29 00:37:08.245911 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-03-29 00:37:08.245930 | orchestrator | Sunday 29 March 2026 00:37:02 +0000 (0:00:02.176) 0:00:19.080 **********
2026-03-29 00:37:08.245941 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:37:08.245952 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:37:08.245963 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:37:08.245973 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:37:08.245984 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:37:08.245995 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:37:08.246006 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'})
2026-03-29 00:37:08.246094 | orchestrator |
2026-03-29 00:37:08.246108 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-03-29 00:37:08.246119 | orchestrator | Sunday 29 March 2026 00:37:03 +0000 (0:00:00.880) 0:00:19.961 **********
2026-03-29 00:37:08.246130 | orchestrator | ok: [testbed-manager]
2026-03-29 00:37:08.246141 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:37:08.246151 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:37:08.246162 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:37:08.246173 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:37:08.246183 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:37:08.246194 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:37:08.246205 | orchestrator |
2026-03-29 00:37:08.246216 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-03-29 00:37:08.246227 | orchestrator | Sunday 29 March 2026 00:37:05 +0000 (0:00:01.801) 0:00:21.762 **********
2026-03-29 00:37:08.246238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:37:08.246252 | orchestrator |
2026-03-29 00:37:08.246263 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-29 00:37:08.246273 | orchestrator | Sunday 29 March 2026 00:37:06 +0000 (0:00:01.140) 0:00:22.897 **********
2026-03-29 00:37:08.246284 | orchestrator | ok: [testbed-manager]
2026-03-29 00:37:08.246295 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:37:08.246305 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:37:08.246316 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:37:08.246327 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:37:08.246337 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:37:08.246348 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:37:08.246359 | orchestrator |
2026-03-29 00:37:08.246370 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-03-29 00:37:08.246381 | orchestrator | Sunday 29 March 2026 00:37:07 +0000 (0:00:01.140) 0:00:24.037 **********
2026-03-29 00:37:08.246392 | orchestrator | ok: [testbed-manager]
2026-03-29 00:37:08.246403 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:37:08.246413 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:37:08.246424 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:37:08.246434 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:37:08.246454 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:37:23.841296 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:37:23.841430 | orchestrator |
2026-03-29 00:37:23.841459 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-29 00:37:23.841472 | orchestrator | Sunday 29 March 2026 00:37:08 +0000 (0:00:00.641) 0:00:24.679 **********
2026-03-29 00:37:23.841484 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-03-29 00:37:23.841495 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-03-29 00:37:23.841506 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-03-29 00:37:23.841517 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-03-29 00:37:23.841528 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-29 00:37:23.841561 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-03-29 00:37:23.841572 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-29 00:37:23.841583 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-29 00:37:23.841593 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-03-29 00:37:23.841604 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-29 00:37:23.841614 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-29 00:37:23.841625 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-03-29 00:37:23.841635 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-29 00:37:23.841646 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-29 00:37:23.841656 | orchestrator |
2026-03-29 00:37:23.841667 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-03-29 00:37:23.841678 | orchestrator | Sunday 29 March 2026 00:37:09 +0000 (0:00:01.170) 0:00:25.850 **********
2026-03-29 00:37:23.841689 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:37:23.841699 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:37:23.841710 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:37:23.841720 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:37:23.841731 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:37:23.841741 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:37:23.841752 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:37:23.841762 | orchestrator |
2026-03-29 00:37:23.841773 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-03-29 00:37:23.841783 | orchestrator | Sunday 29 March 2026 00:37:10 +0000 (0:00:00.609) 0:00:26.460 **********
2026-03-29 00:37:23.841814 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-0, testbed-node-2, testbed-node-5
2026-03-29 00:37:23.841829 | orchestrator |
2026-03-29 00:37:23.841839 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-03-29 00:37:23.841850 | orchestrator | Sunday 29 March 2026 00:37:14 +0000 (0:00:04.542) 0:00:31.002 **********
2026-03-29 00:37:23.841862 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-03-29 00:37:23.841882 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-03-29 00:37:23.841894 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-29 00:37:23.841906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-29 00:37:23.841917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-29 00:37:23.841928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-29 00:37:23.841965 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-29 00:37:23.841978 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-03-29 00:37:23.842098 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-03-29 00:37:23.842113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-03-29 00:37:23.842125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-03-29 00:37:23.842136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-03-29 00:37:23.842166 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-03-29 00:37:23.842178 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-03-29 00:37:23.842189 | orchestrator |
2026-03-29 00:37:23.842207 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-03-29 00:37:23.842218 | orchestrator | Sunday 29 March 2026 00:37:19 +0000 (0:00:04.983) 0:00:35.986 **********
2026-03-29 00:37:23.842229 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-03-29 00:37:23.842241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-29 00:37:23.842252 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-03-29 00:37:23.842272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-29 00:37:23.842293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-29 00:37:23.842323 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-03-29 00:37:23.842343 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-29 00:37:23.842377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-03-29 00:37:36.775813 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-03-29 00:37:36.775907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-03-29 00:37:36.775919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-03-29 00:37:36.775926 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-03-29 00:37:36.775932 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-03-29 00:37:36.775939 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-03-29 00:37:36.775945 | orchestrator |
2026-03-29 00:37:36.775953 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-03-29 00:37:36.775996 | orchestrator | Sunday 29 March 2026 00:37:24 +0000 (0:00:05.220) 0:00:41.207 **********
2026-03-29 00:37:36.776017 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:37:36.776024 | orchestrator |
2026-03-29 00:37:36.776030 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-29 00:37:36.776036 | orchestrator | Sunday 29 March 2026 00:37:26 +0000 (0:00:01.182) 0:00:42.390 **********
2026-03-29 00:37:36.776043 | orchestrator | ok: [testbed-manager]
2026-03-29 00:37:36.776050 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:37:36.776056 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:37:36.776063 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:37:36.776069 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:37:36.776076 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:37:36.776082 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:37:36.776088 | orchestrator |
2026-03-29 00:37:36.776111 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-29 00:37:36.776117 | orchestrator | Sunday 29 March 2026 00:37:27 +0000 (0:00:01.598) 0:00:43.988 **********
2026-03-29 00:37:36.776124 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-29 00:37:36.776130 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-29 00:37:36.776136 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-29 00:37:36.776142 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-29 00:37:36.776149 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-29 00:37:36.776155 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-29 00:37:36.776161 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-29 00:37:36.776167 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-29 00:37:36.776173 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:37:36.776180 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-29 00:37:36.776186 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-29 00:37:36.776192 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-29 00:37:36.776198 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:37:36.776204 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-29 00:37:36.776210 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-29 00:37:36.776216 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-29 00:37:36.776223 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-29 00:37:36.776229 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:37:36.776248 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-29 00:37:36.776254 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-29 00:37:36.776280 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-29 00:37:36.776286 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-29 00:37:36.776292 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-29 00:37:36.776298 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:37:36.776304 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-29 00:37:36.776310 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-29 00:37:36.776316 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-29 00:37:36.776322 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:37:36.776329 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-29 00:37:36.776335 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:37:36.776341 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-29 00:37:36.776347 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-29 00:37:36.776353 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-29 00:37:36.776359 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-29 00:37:36.776367 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:37:36.776374 | orchestrator |
2026-03-29 00:37:36.776381 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-03-29 00:37:36.776393 | orchestrator | Sunday 29 March 2026 00:37:28 +0000 (0:00:00.803) 0:00:44.792 **********
2026-03-29 00:37:36.776401 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:37:36.776408 | orchestrator |
2026-03-29 00:37:36.776415 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-03-29 00:37:36.776422 | orchestrator | Sunday 29 March 2026 00:37:29 +0000 (0:00:01.190) 0:00:45.982 **********
2026-03-29 00:37:36.776429 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:37:36.776436 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:37:36.776448 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:37:36.776455 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:37:36.776463 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:37:36.776470 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:37:36.776477 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:37:36.776484 | orchestrator |
2026-03-29 00:37:36.776491 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-03-29 00:37:36.776498 | orchestrator | Sunday 29 March 2026 00:37:30 +0000 (0:00:00.562) 0:00:46.545 **********
2026-03-29 00:37:36.776505 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:37:36.776513 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:37:36.776520 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:37:36.776526 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:37:36.776533 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:37:36.776540 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:37:36.776547 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:37:36.776554 | orchestrator |
2026-03-29 00:37:36.776561 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-03-29 00:37:36.776568 | orchestrator | Sunday 29 March 2026 00:37:31 +0000 (0:00:00.610) 0:00:47.403 **********
2026-03-29 00:37:36.776579 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:37:36.776589 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:37:36.776599 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:37:36.776609 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:37:36.776620 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:37:36.776631 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:37:36.776643 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:37:36.776653 | orchestrator |
2026-03-29 00:37:36.776665 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-03-29 00:37:36.776672 | orchestrator | Sunday 29 March 2026 00:37:31 +0000 (0:00:00.610) 0:00:48.014 **********
2026-03-29 00:37:36.776679 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:37:36.776686 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:37:36.776693 | orchestrator | ok: [testbed-manager]
2026-03-29 00:37:36.776700 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:37:36.776707 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:37:36.776714 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:37:36.776721 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:37:36.776727 | orchestrator | 2026-03-29 00:37:36.776733 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-03-29 00:37:36.776739 | orchestrator | Sunday 29 March 2026 00:37:33 +0000 (0:00:01.779) 0:00:49.793 ********** 2026-03-29 00:37:36.776745 | orchestrator | ok: [testbed-manager] 2026-03-29 00:37:36.776751 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:37:36.776757 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:37:36.776763 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:37:36.776769 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:37:36.776775 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:37:36.776781 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:37:36.776787 | orchestrator | 2026-03-29 00:37:36.776793 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-03-29 00:37:36.776799 | orchestrator | Sunday 29 March 2026 00:37:34 +0000 (0:00:01.300) 0:00:51.094 ********** 2026-03-29 00:37:36.776811 | orchestrator | ok: [testbed-manager] 2026-03-29 00:37:36.776817 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:37:36.776823 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:37:36.776828 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:37:36.776834 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:37:36.776840 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:37:36.776846 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:37:36.776852 | orchestrator | 2026-03-29 00:37:36.776863 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-03-29 00:37:38.412598 | orchestrator | Sunday 29 March 2026 00:37:36 +0000 (0:00:01.986) 0:00:53.080 ********** 2026-03-29 00:37:38.412682 | 
orchestrator | skipping: [testbed-manager] 2026-03-29 00:37:38.412701 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:37:38.412717 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:37:38.412731 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:37:38.412744 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:37:38.412757 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:37:38.412772 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:37:38.412787 | orchestrator | 2026-03-29 00:37:38.412802 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-03-29 00:37:38.412816 | orchestrator | Sunday 29 March 2026 00:37:37 +0000 (0:00:00.783) 0:00:53.864 ********** 2026-03-29 00:37:38.412831 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:37:38.412845 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:37:38.412860 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:37:38.412874 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:37:38.412888 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:37:38.412902 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:37:38.412915 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:37:38.412929 | orchestrator | 2026-03-29 00:37:38.412944 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:37:38.413017 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-29 00:37:38.413034 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-29 00:37:38.413050 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-29 00:37:38.413064 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-29 00:37:38.413078 | 
orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-29 00:37:38.413092 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-29 00:37:38.413106 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-29 00:37:38.413120 | orchestrator | 2026-03-29 00:37:38.413138 | orchestrator | 2026-03-29 00:37:38.413152 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:37:38.413165 | orchestrator | Sunday 29 March 2026 00:37:38 +0000 (0:00:00.539) 0:00:54.403 ********** 2026-03-29 00:37:38.413179 | orchestrator | =============================================================================== 2026-03-29 00:37:38.413193 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.22s 2026-03-29 00:37:38.413207 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.98s 2026-03-29 00:37:38.413221 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.54s 2026-03-29 00:37:38.413260 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.92s 2026-03-29 00:37:38.413275 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.65s 2026-03-29 00:37:38.413289 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.18s 2026-03-29 00:37:38.413303 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 1.99s 2026-03-29 00:37:38.413316 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.80s 2026-03-29 00:37:38.413330 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.78s 2026-03-29 00:37:38.413343 | orchestrator | 
osism.commons.network : Remove ifupdown package ------------------------- 1.61s 2026-03-29 00:37:38.413356 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.60s 2026-03-29 00:37:38.413370 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.58s 2026-03-29 00:37:38.413384 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.54s 2026-03-29 00:37:38.413398 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.30s 2026-03-29 00:37:38.413412 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.19s 2026-03-29 00:37:38.413425 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.18s 2026-03-29 00:37:38.413440 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.17s 2026-03-29 00:37:38.413453 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.14s 2026-03-29 00:37:38.413465 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.14s 2026-03-29 00:37:38.413480 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.13s 2026-03-29 00:37:38.662650 | orchestrator | + osism apply wireguard 2026-03-29 00:37:49.991003 | orchestrator | 2026-03-29 00:37:49 | INFO  | Prepare task for execution of wireguard. 2026-03-29 00:37:50.066155 | orchestrator | 2026-03-29 00:37:50 | INFO  | Task d0ec08c5-30b1-4b85-bad7-f45d5b0369ae (wireguard) was prepared for execution. 2026-03-29 00:37:50.066253 | orchestrator | 2026-03-29 00:37:50 | INFO  | It takes a moment until task d0ec08c5-30b1-4b85-bad7-f45d5b0369ae (wireguard) has been started and output is visible here. 
2026-03-29 00:38:07.648626 | orchestrator |
2026-03-29 00:38:07.648765 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-29 00:38:07.648781 | orchestrator |
2026-03-29 00:38:07.648794 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-29 00:38:07.648805 | orchestrator | Sunday 29 March 2026 00:37:53 +0000 (0:00:00.213) 0:00:00.213 **********
2026-03-29 00:38:07.648818 | orchestrator | ok: [testbed-manager]
2026-03-29 00:38:07.648831 | orchestrator |
2026-03-29 00:38:07.648842 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-29 00:38:07.648852 | orchestrator | Sunday 29 March 2026 00:37:54 +0000 (0:00:01.464) 0:00:01.678 **********
2026-03-29 00:38:07.648863 | orchestrator | changed: [testbed-manager]
2026-03-29 00:38:07.648884 | orchestrator |
2026-03-29 00:38:07.648963 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-29 00:38:07.648984 | orchestrator | Sunday 29 March 2026 00:38:00 +0000 (0:00:05.609) 0:00:07.287 **********
2026-03-29 00:38:07.649004 | orchestrator | changed: [testbed-manager]
2026-03-29 00:38:07.649023 | orchestrator |
2026-03-29 00:38:07.649041 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-29 00:38:07.649085 | orchestrator | Sunday 29 March 2026 00:38:00 +0000 (0:00:00.424) 0:00:08.257 **********
2026-03-29 00:38:07.649097 | orchestrator | changed: [testbed-manager]
2026-03-29 00:38:07.649108 | orchestrator |
2026-03-29 00:38:07.649119 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-29 00:38:07.649130 | orchestrator | Sunday 29 March 2026 00:38:01 +0000 (0:00:00.547) 0:00:08.805 **********
2026-03-29 00:38:07.649143 | orchestrator | ok: [testbed-manager]
2026-03-29 00:38:07.649188 | orchestrator |
2026-03-29 00:38:07.649202 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-29 00:38:07.649215 | orchestrator | Sunday 29 March 2026 00:38:01 +0000 (0:00:00.547) 0:00:08.805 **********
2026-03-29 00:38:07.649227 | orchestrator | ok: [testbed-manager]
2026-03-29 00:38:07.649239 | orchestrator |
2026-03-29 00:38:07.649252 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-29 00:38:07.649264 | orchestrator | Sunday 29 March 2026 00:38:02 +0000 (0:00:00.417) 0:00:09.222 **********
2026-03-29 00:38:07.649277 | orchestrator | ok: [testbed-manager]
2026-03-29 00:38:07.649289 | orchestrator |
2026-03-29 00:38:07.649301 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-29 00:38:07.649319 | orchestrator | Sunday 29 March 2026 00:38:02 +0000 (0:00:00.409) 0:00:09.631 **********
2026-03-29 00:38:07.649332 | orchestrator | changed: [testbed-manager]
2026-03-29 00:38:07.649344 | orchestrator |
2026-03-29 00:38:07.649356 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-29 00:38:07.649369 | orchestrator | Sunday 29 March 2026 00:38:03 +0000 (0:00:01.141) 0:00:10.773 **********
2026-03-29 00:38:07.649382 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-29 00:38:07.649395 | orchestrator | changed: [testbed-manager]
2026-03-29 00:38:07.649407 | orchestrator |
2026-03-29 00:38:07.649420 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-29 00:38:07.649432 | orchestrator | Sunday 29 March 2026 00:38:04 +0000 (0:00:00.887) 0:00:11.660 **********
2026-03-29 00:38:07.649444 | orchestrator | changed: [testbed-manager]
2026-03-29 00:38:07.649457 | orchestrator |
2026-03-29 00:38:07.649470 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-29 00:38:07.649483 | orchestrator | Sunday 29 March 2026 00:38:06 +0000 (0:00:01.901) 0:00:13.562 **********
2026-03-29 00:38:07.649495 | orchestrator | changed: [testbed-manager]
2026-03-29 00:38:07.649505 | orchestrator |
2026-03-29 00:38:07.649516 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:38:07.649527 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:38:07.649539 | orchestrator |
2026-03-29 00:38:07.649550 | orchestrator |
2026-03-29 00:38:07.649561 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:38:07.649571 | orchestrator | Sunday 29 March 2026 00:38:07 +0000 (0:00:00.911) 0:00:14.473 **********
2026-03-29 00:38:07.649582 | orchestrator | ===============================================================================
2026-03-29 00:38:07.649592 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.61s
2026-03-29 00:38:07.649603 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.90s
2026-03-29 00:38:07.649614 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.46s
2026-03-29 00:38:07.649625 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.14s
2026-03-29 00:38:07.649635 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.91s
2026-03-29 00:38:07.649646 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.89s
2026-03-29 00:38:07.649657 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.55s
2026-03-29 00:38:07.649667 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s
2026-03-29 00:38:07.649678 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s
2026-03-29 00:38:07.649688 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.42s
2026-03-29 00:38:07.649699 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s
2026-03-29 00:38:07.818292 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-29 00:38:07.851330 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-29 00:38:07.851481 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-29 00:38:07.924801 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 189 0 --:--:-- --:--:-- --:--:-- 191
2026-03-29 00:38:07.937088 | orchestrator | + osism apply --environment custom workarounds
2026-03-29 00:38:09.123958 | orchestrator | 2026-03-29 00:38:09 | INFO  | Trying to run play workarounds in environment custom
2026-03-29 00:38:19.239203 | orchestrator | 2026-03-29 00:38:19 | INFO  | Prepare task for execution of workarounds.
2026-03-29 00:38:19.325860 | orchestrator | 2026-03-29 00:38:19 | INFO  | Task ddd2eb30-1960-4325-a798-bc3054e1f363 (workarounds) was prepared for execution.
2026-03-29 00:38:19.325990 | orchestrator | 2026-03-29 00:38:19 | INFO  | It takes a moment until task ddd2eb30-1960-4325-a798-bc3054e1f363 (workarounds) has been started and output is visible here.
2026-03-29 00:38:42.464578 | orchestrator |
2026-03-29 00:38:42.464662 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 00:38:42.464669 | orchestrator |
2026-03-29 00:38:42.464674 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-29 00:38:42.464678 | orchestrator | Sunday 29 March 2026 00:38:22 +0000 (0:00:00.175) 0:00:00.175 **********
2026-03-29 00:38:42.464683 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-29 00:38:42.464688 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-29 00:38:42.464692 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-29 00:38:42.464695 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-29 00:38:42.464700 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-29 00:38:42.464703 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-29 00:38:42.464707 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-29 00:38:42.464711 | orchestrator |
2026-03-29 00:38:42.464715 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-29 00:38:42.464719 | orchestrator |
2026-03-29 00:38:42.464723 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-29 00:38:42.464727 | orchestrator | Sunday 29 March 2026 00:38:23 +0000 (0:00:00.688) 0:00:00.863 **********
2026-03-29 00:38:42.464730 | orchestrator | ok: [testbed-manager]
2026-03-29 00:38:42.464735 | orchestrator |
2026-03-29 00:38:42.464752 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-29 00:38:42.464756 | orchestrator |
2026-03-29 00:38:42.464760 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-29 00:38:42.464763 | orchestrator | Sunday 29 March 2026 00:38:25 +0000 (0:00:02.313) 0:00:03.177 **********
2026-03-29 00:38:42.464767 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:38:42.464771 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:38:42.464774 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:38:42.464778 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:38:42.464782 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:38:42.464785 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:38:42.464789 | orchestrator |
2026-03-29 00:38:42.464793 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-29 00:38:42.464796 | orchestrator |
2026-03-29 00:38:42.464800 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-29 00:38:42.464804 | orchestrator | Sunday 29 March 2026 00:38:27 +0000 (0:00:02.209) 0:00:05.387 **********
2026-03-29 00:38:42.464808 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-29 00:38:42.464812 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-29 00:38:42.464816 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-29 00:38:42.464853 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-29 00:38:42.464858 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-29 00:38:42.464862 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-29 00:38:42.464866 | orchestrator |
2026-03-29 00:38:42.464869 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-29 00:38:42.464873 | orchestrator | Sunday 29 March 2026 00:38:28 +0000 (0:00:01.295) 0:00:06.682 **********
2026-03-29 00:38:42.464877 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:38:42.464881 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:38:42.464885 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:38:42.464889 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:38:42.464892 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:38:42.464896 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:38:42.464900 | orchestrator |
2026-03-29 00:38:42.464904 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-29 00:38:42.464907 | orchestrator | Sunday 29 March 2026 00:38:32 +0000 (0:00:03.761) 0:00:10.443 **********
2026-03-29 00:38:42.464911 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:38:42.464915 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:38:42.464919 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:38:42.464922 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:38:42.464926 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:38:42.464930 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:38:42.464933 | orchestrator |
2026-03-29 00:38:42.464937 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-29 00:38:42.464941 | orchestrator |
2026-03-29 00:38:42.464945 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-29 00:38:42.464949 | orchestrator | Sunday 29 March 2026 00:38:33 +0000 (0:00:00.468) 0:00:10.911 **********
2026-03-29 00:38:42.464952 | orchestrator | changed: [testbed-manager]
2026-03-29 00:38:42.464956 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:38:42.464960 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:38:42.464964 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:38:42.464967 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:38:42.464971 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:38:42.464975 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:38:42.464979 | orchestrator |
2026-03-29 00:38:42.464982 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-29 00:38:42.464986 | orchestrator | Sunday 29 March 2026 00:38:34 +0000 (0:00:01.602) 0:00:12.514 **********
2026-03-29 00:38:42.464990 | orchestrator | changed: [testbed-manager]
2026-03-29 00:38:42.464994 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:38:42.464997 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:38:42.465001 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:38:42.465005 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:38:42.465009 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:38:42.465023 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:38:42.465027 | orchestrator |
2026-03-29 00:38:42.465031 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-29 00:38:42.465035 | orchestrator | Sunday 29 March 2026 00:38:36 +0000 (0:00:01.390) 0:00:13.905 **********
2026-03-29 00:38:42.465039 | orchestrator | ok: [testbed-manager]
2026-03-29 00:38:42.465043 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:38:42.465046 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:38:42.465050 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:38:42.465054 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:38:42.465058 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:38:42.465061 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:38:42.465065 | orchestrator |
2026-03-29 00:38:42.465072 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-29 00:38:42.465076 | orchestrator | Sunday 29 March 2026 00:38:37 +0000 (0:00:01.561) 0:00:15.467 **********
2026-03-29 00:38:42.465080 | orchestrator | changed: [testbed-manager]
2026-03-29 00:38:42.465084 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:38:42.465087 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:38:42.465091 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:38:42.465095 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:38:42.465099 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:38:42.465102 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:38:42.465106 | orchestrator |
2026-03-29 00:38:42.465110 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-29 00:38:42.465114 | orchestrator | Sunday 29 March 2026 00:38:39 +0000 (0:00:01.474) 0:00:16.941 **********
2026-03-29 00:38:42.465117 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:38:42.465121 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:38:42.465128 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:38:42.465131 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:38:42.465135 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:38:42.465140 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:38:42.465144 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:38:42.465148 | orchestrator |
2026-03-29 00:38:42.465152 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-29 00:38:42.465157 | orchestrator |
2026-03-29 00:38:42.465161 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-29 00:38:42.465165 | orchestrator | Sunday 29 March 2026 00:38:39 +0000 (0:00:00.634) 0:00:17.575 **********
2026-03-29 00:38:42.465169 | orchestrator | ok: [testbed-manager]
2026-03-29 00:38:42.465174 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:38:42.465178 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:38:42.465182 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:38:42.465186 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:38:42.465190 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:38:42.465195 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:38:42.465199 | orchestrator |
2026-03-29 00:38:42.465203 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:38:42.465209 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-29 00:38:42.465214 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:38:42.465219 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:38:42.465223 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:38:42.465227 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:38:42.465232 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:38:42.465236 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:38:42.465240 | orchestrator |
2026-03-29 00:38:42.465244 | orchestrator |
2026-03-29 00:38:42.465249 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:38:42.465253 | orchestrator | Sunday 29 March 2026 00:38:42 +0000 (0:00:02.634) 0:00:20.209 **********
2026-03-29 00:38:42.465258 | orchestrator | ===============================================================================
2026-03-29 00:38:42.465266 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.76s
2026-03-29 00:38:42.465270 | orchestrator | Install python3-docker -------------------------------------------------- 2.63s
2026-03-29 00:38:42.465274 | orchestrator | Apply netplan configuration --------------------------------------------- 2.31s
2026-03-29 00:38:42.465278 | orchestrator | Apply netplan configuration --------------------------------------------- 2.21s
2026-03-29 00:38:42.465283 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.60s
2026-03-29 00:38:42.465287 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.56s
2026-03-29 00:38:42.465291 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.47s
2026-03-29 00:38:42.465295 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.39s
2026-03-29 00:38:42.465300 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.30s
2026-03-29 00:38:42.465304 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.69s
2026-03-29 00:38:42.465308 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s
2026-03-29 00:38:42.465315 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.47s
2026-03-29 00:38:42.766444 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-29 00:38:54.028729 | orchestrator | 2026-03-29 00:38:54 | INFO  | Prepare task for execution of reboot.
2026-03-29 00:38:54.105776 | orchestrator | 2026-03-29 00:38:54 | INFO  | Task c8dae833-b7f0-409c-89ca-ada904e057b6 (reboot) was prepared for execution.
2026-03-29 00:38:54.105876 | orchestrator | 2026-03-29 00:38:54 | INFO  | It takes a moment until task c8dae833-b7f0-409c-89ca-ada904e057b6 (reboot) has been started and output is visible here.
2026-03-29 00:39:04.908018 | orchestrator |
2026-03-29 00:39:04.908107 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-29 00:39:04.908116 | orchestrator |
2026-03-29 00:39:04.908120 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-29 00:39:04.908125 | orchestrator | Sunday 29 March 2026 00:38:57 +0000 (0:00:00.218) 0:00:00.218 **********
2026-03-29 00:39:04.908129 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:39:04.908134 | orchestrator |
2026-03-29 00:39:04.908138 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-29 00:39:04.908142 | orchestrator | Sunday 29 March 2026 00:38:57 +0000 (0:00:00.116) 0:00:00.335 **********
2026-03-29 00:39:04.908146 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:39:04.908150 | orchestrator |
2026-03-29 00:39:04.908165 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-29 00:39:04.908169 | orchestrator | Sunday 29 March 2026 00:38:58 +0000 (0:00:01.245) 0:00:01.580 **********
2026-03-29 00:39:04.908173 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:39:04.908177 | orchestrator |
2026-03-29 00:39:04.908180 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-29 00:39:04.908184 | orchestrator |
2026-03-29 00:39:04.908188 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-29 00:39:04.908192 | orchestrator | Sunday 29 March 2026 00:38:58 +0000 (0:00:00.109) 0:00:01.689 **********
2026-03-29 00:39:04.908195 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:39:04.908199 | orchestrator |
2026-03-29 00:39:04.908203 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-29 00:39:04.908207 | orchestrator | Sunday 29 March 2026 00:38:58 +0000 (0:00:00.086) 0:00:01.776 **********
2026-03-29 00:39:04.908210 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:39:04.908214 | orchestrator |
2026-03-29 00:39:04.908218 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-29 00:39:04.908222 | orchestrator | Sunday 29 March 2026 00:38:59 +0000 (0:00:00.994) 0:00:02.771 **********
2026-03-29 00:39:04.908226 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:39:04.908229 | orchestrator |
2026-03-29 00:39:04.908245 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-29 00:39:04.908250 | orchestrator |
2026-03-29 00:39:04.908254 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-29 00:39:04.908258 | orchestrator | Sunday 29 March 2026 00:38:59 +0000 (0:00:00.134) 0:00:02.906 **********
2026-03-29 00:39:04.908261 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:39:04.908265 | orchestrator |
2026-03-29 00:39:04.908269 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-29 00:39:04.908273 | orchestrator | Sunday 29 March 2026 00:38:59 +0000 (0:00:00.103) 0:00:03.009 **********
2026-03-29 00:39:04.908276 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:39:04.908280 | orchestrator |
2026-03-29 00:39:04.908284 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-29 00:39:04.908288 | orchestrator | Sunday 29 March 2026 00:39:00 +0000 (0:00:00.999) 0:00:04.008 **********
2026-03-29 00:39:04.908291 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:39:04.908295 | orchestrator |
2026-03-29 00:39:04.908299 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-29 00:39:04.908302 | orchestrator |
2026-03-29 00:39:04.908306 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-29 00:39:04.908310 | orchestrator | Sunday 29 March 2026 00:39:00 +0000 (0:00:00.115) 0:00:04.123 **********
2026-03-29 00:39:04.908314 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:39:04.908317 | orchestrator |
2026-03-29 00:39:04.908321 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-29 00:39:04.908325 | orchestrator | Sunday 29 March 2026 00:39:01 +0000 (0:00:00.091) 0:00:04.215 **********
2026-03-29 00:39:04.908328 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:39:04.908332 | orchestrator |
2026-03-29 00:39:04.908336 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-29 00:39:04.908340 | orchestrator | Sunday 29 March 2026 00:39:02 +0000 (0:00:00.997) 0:00:05.212 **********
2026-03-29 00:39:04.908343 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:39:04.908347 | orchestrator |
2026-03-29 00:39:04.908351 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-29 00:39:04.908355 | orchestrator |
2026-03-29 00:39:04.908358 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-29 00:39:04.908362 | orchestrator | Sunday 29 March 2026 00:39:02 +0000 (0:00:00.120) 0:00:05.333 **********
2026-03-29 00:39:04.908366 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:39:04.908369 | orchestrator |
2026-03-29 00:39:04.908373 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-29 00:39:04.908377 | orchestrator | Sunday 29 March 2026 00:39:02 +0000 (0:00:00.089) 0:00:05.422 **********
2026-03-29 00:39:04.908381 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:39:04.908384 | orchestrator |
2026-03-29 00:39:04.908388 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-29 00:39:04.908392 | orchestrator | Sunday 29 March 2026 00:39:03 +0000 (0:00:01.119) 0:00:06.541 **********
2026-03-29 00:39:04.908395 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:39:04.908399 | orchestrator |
2026-03-29 00:39:04.908403 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-29 00:39:04.908406 | orchestrator |
2026-03-29 00:39:04.908410 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-29 00:39:04.908414 | orchestrator | Sunday 29 March 2026 00:39:03 +0000 (0:00:00.119) 0:00:06.661 **********
2026-03-29 00:39:04.908417 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:39:04.908421 | orchestrator |
2026-03-29 00:39:04.908425 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-29 00:39:04.908429 | orchestrator | Sunday 29 March 2026 00:39:03 +0000 (0:00:00.093) 0:00:06.755 **********
2026-03-29 00:39:04.908432 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:39:04.908436 | orchestrator |
2026-03-29 00:39:04.908440 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-29 00:39:04.908451 | orchestrator | Sunday 29 March 2026 00:39:04 +0000 (0:00:01.043) 0:00:07.799 **********
2026-03-29 00:39:04.908471 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:39:04.908475 | orchestrator |
2026-03-29 00:39:04.908479 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:39:04.908484 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:39:04.908488 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:39:04.908495 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-03-29 00:39:04.908499 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:39:04.908503 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:39:04.908507 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:39:04.908510 | orchestrator | 2026-03-29 00:39:04.908514 | orchestrator | 2026-03-29 00:39:04.908518 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:39:04.908522 | orchestrator | Sunday 29 March 2026 00:39:04 +0000 (0:00:00.043) 0:00:07.843 ********** 2026-03-29 00:39:04.908525 | orchestrator | =============================================================================== 2026-03-29 00:39:04.908529 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.40s 2026-03-29 00:39:04.908533 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.64s 2026-03-29 00:39:04.908537 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.58s 2026-03-29 00:39:05.077105 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-29 00:39:16.343053 | orchestrator | 2026-03-29 00:39:16 | INFO  | Prepare task for execution of wait-for-connection. 2026-03-29 00:39:16.415713 | orchestrator | 2026-03-29 00:39:16 | INFO  | Task 0e49f881-7a51-465f-8270-8d6c7a0d4c38 (wait-for-connection) was prepared for execution. 2026-03-29 00:39:16.415792 | orchestrator | 2026-03-29 00:39:16 | INFO  | It takes a moment until task 0e49f881-7a51-465f-8270-8d6c7a0d4c38 (wait-for-connection) has been started and output is visible here. 
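The trace above reboots each node without waiting for it to come back, then runs a separate `wait-for-connection` play against all nodes at once. A minimal stand-alone sketch of the same two-phase idea, polling until SSH answers again — the use of plain `ssh` (rather than the osism wrapper seen in the log) and the timeout values are assumptions for illustration:

```shell
#!/usr/bin/env bash
# Poll a host until SSH accepts a connection again, or give up after
# $timeout seconds. Mirrors the reboot-then-wait pattern in the log above.
wait_for_ssh() {
    local host=$1 timeout=${2:-300} waited=0
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        (( waited += 5 ))
        if (( waited >= timeout )); then
            echo "$host not reachable after ${timeout}s" >&2
            return 1
        fi
        sleep 5
    done
}
```

Rebooting without waiting and only then polling all nodes lets the reboots proceed in parallel instead of serializing on each node's boot time.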
2026-03-29 00:39:31.316986 | orchestrator | 2026-03-29 00:39:31.317128 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-29 00:39:31.317149 | orchestrator | 2026-03-29 00:39:31.317162 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-29 00:39:31.317174 | orchestrator | Sunday 29 March 2026 00:39:19 +0000 (0:00:00.303) 0:00:00.303 ********** 2026-03-29 00:39:31.317186 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:39:31.317198 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:39:31.317209 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:39:31.317220 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:39:31.317231 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:39:31.317243 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:39:31.317254 | orchestrator | 2026-03-29 00:39:31.317265 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:39:31.317277 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:39:31.317290 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:39:31.317329 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:39:31.317340 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:39:31.317351 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:39:31.317362 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:39:31.317373 | orchestrator | 2026-03-29 00:39:31.317433 | orchestrator | 2026-03-29 00:39:31.317447 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-29 00:39:31.317460 | orchestrator | Sunday 29 March 2026 00:39:31 +0000 (0:00:11.535) 0:00:11.839 ********** 2026-03-29 00:39:31.317472 | orchestrator | =============================================================================== 2026-03-29 00:39:31.317484 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.54s 2026-03-29 00:39:31.429674 | orchestrator | + osism apply hddtemp 2026-03-29 00:39:42.579747 | orchestrator | 2026-03-29 00:39:42 | INFO  | Prepare task for execution of hddtemp. 2026-03-29 00:39:42.651465 | orchestrator | 2026-03-29 00:39:42 | INFO  | Task ba8dfec0-1114-486f-9d9d-0bcb30e77643 (hddtemp) was prepared for execution. 2026-03-29 00:39:42.651549 | orchestrator | 2026-03-29 00:39:42 | INFO  | It takes a moment until task ba8dfec0-1114-486f-9d9d-0bcb30e77643 (hddtemp) has been started and output is visible here. 2026-03-29 00:40:09.886979 | orchestrator | 2026-03-29 00:40:09.887124 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-29 00:40:09.887143 | orchestrator | 2026-03-29 00:40:09.887155 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-29 00:40:09.887166 | orchestrator | Sunday 29 March 2026 00:39:45 +0000 (0:00:00.327) 0:00:00.327 ********** 2026-03-29 00:40:09.887177 | orchestrator | ok: [testbed-manager] 2026-03-29 00:40:09.887189 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:40:09.887199 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:40:09.887213 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:40:09.887233 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:40:09.887250 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:40:09.887286 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:40:09.887315 | orchestrator | 2026-03-29 00:40:09.887343 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-03-29 00:40:09.887364 | orchestrator | Sunday 29 March 2026 00:39:46 +0000 (0:00:00.572) 0:00:00.899 ********** 2026-03-29 00:40:09.887383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:40:09.887403 | orchestrator | 2026-03-29 00:40:09.887421 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-29 00:40:09.887439 | orchestrator | Sunday 29 March 2026 00:39:47 +0000 (0:00:01.104) 0:00:02.004 ********** 2026-03-29 00:40:09.887485 | orchestrator | ok: [testbed-manager] 2026-03-29 00:40:09.887505 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:40:09.887525 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:40:09.887546 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:40:09.887566 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:40:09.887584 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:40:09.887604 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:40:09.887622 | orchestrator | 2026-03-29 00:40:09.887641 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-29 00:40:09.887661 | orchestrator | Sunday 29 March 2026 00:39:50 +0000 (0:00:02.504) 0:00:04.508 ********** 2026-03-29 00:40:09.887679 | orchestrator | changed: [testbed-manager] 2026-03-29 00:40:09.887699 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:40:09.887746 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:40:09.887766 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:40:09.887785 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:40:09.887806 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:40:09.887824 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:40:09.887842 | 
orchestrator | 2026-03-29 00:40:09.887861 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-03-29 00:40:09.887880 | orchestrator | Sunday 29 March 2026 00:39:51 +0000 (0:00:00.923) 0:00:05.432 ********** 2026-03-29 00:40:09.887899 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:40:09.887917 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:40:09.887934 | orchestrator | ok: [testbed-manager] 2026-03-29 00:40:09.887952 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:40:09.887971 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:40:09.887988 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:40:09.888007 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:40:09.888025 | orchestrator | 2026-03-29 00:40:09.888069 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-29 00:40:09.888089 | orchestrator | Sunday 29 March 2026 00:39:52 +0000 (0:00:01.354) 0:00:06.787 ********** 2026-03-29 00:40:09.888106 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:40:09.888124 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:40:09.888142 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:40:09.888161 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:40:09.888180 | orchestrator | changed: [testbed-manager] 2026-03-29 00:40:09.888200 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:40:09.888217 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:40:09.888236 | orchestrator | 2026-03-29 00:40:09.888255 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-29 00:40:09.888275 | orchestrator | Sunday 29 March 2026 00:39:52 +0000 (0:00:00.642) 0:00:07.429 ********** 2026-03-29 00:40:09.888293 | orchestrator | changed: [testbed-manager] 2026-03-29 00:40:09.888312 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:40:09.888330 | orchestrator | changed: [testbed-node-1] 
2026-03-29 00:40:09.888348 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:40:09.888365 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:40:09.888383 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:40:09.888402 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:40:09.888420 | orchestrator | 2026-03-29 00:40:09.888439 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-29 00:40:09.888457 | orchestrator | Sunday 29 March 2026 00:40:06 +0000 (0:00:13.897) 0:00:21.326 ********** 2026-03-29 00:40:09.888476 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:40:09.888496 | orchestrator | 2026-03-29 00:40:09.888515 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-29 00:40:09.888533 | orchestrator | Sunday 29 March 2026 00:40:07 +0000 (0:00:01.076) 0:00:22.403 ********** 2026-03-29 00:40:09.888559 | orchestrator | changed: [testbed-manager] 2026-03-29 00:40:09.888589 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:40:09.888610 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:40:09.888628 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:40:09.888646 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:40:09.888664 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:40:09.888682 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:40:09.888700 | orchestrator | 2026-03-29 00:40:09.888719 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:40:09.888737 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:40:09.888782 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:40:09.888820 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:40:09.888840 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:40:09.888870 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:40:09.888889 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:40:09.888908 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:40:09.888927 | orchestrator | 2026-03-29 00:40:09.888944 | orchestrator | 2026-03-29 00:40:09.888963 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:40:09.888981 | orchestrator | Sunday 29 March 2026 00:40:09 +0000 (0:00:01.732) 0:00:24.135 ********** 2026-03-29 00:40:09.888999 | orchestrator | =============================================================================== 2026-03-29 00:40:09.889017 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.90s 2026-03-29 00:40:09.889094 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.50s 2026-03-29 00:40:09.889118 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.73s 2026-03-29 00:40:09.889138 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.35s 2026-03-29 00:40:09.889156 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.10s 2026-03-29 00:40:09.889174 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.08s 2026-03-29 00:40:09.889191 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 0.92s 2026-03-29 00:40:09.889210 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.64s 2026-03-29 00:40:09.889229 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.57s 2026-03-29 00:40:10.014638 | orchestrator | ++ semver latest 7.1.1 2026-03-29 00:40:10.061127 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-29 00:40:10.061222 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-29 00:40:10.061246 | orchestrator | + sudo systemctl restart manager.service 2026-03-29 00:40:23.403232 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-29 00:40:23.403319 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-29 00:40:23.403333 | orchestrator | + local max_attempts=60 2026-03-29 00:40:23.403344 | orchestrator | + local name=ceph-ansible 2026-03-29 00:40:23.403355 | orchestrator | + local attempt_num=1 2026-03-29 00:40:23.403470 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:40:23.450880 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:40:23.450964 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:40:23.450974 | orchestrator | + sleep 5 2026-03-29 00:40:28.455280 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:40:28.542768 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:40:28.542877 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:40:28.542895 | orchestrator | + sleep 5 2026-03-29 00:40:33.545699 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:40:33.569968 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:40:33.570138 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:40:33.570156 | orchestrator | + sleep 5 2026-03-29 00:40:38.574380 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:40:38.605591 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:40:38.605674 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:40:38.605688 | orchestrator | + sleep 5 2026-03-29 00:40:43.609874 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:40:43.642335 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:40:43.642428 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:40:43.642442 | orchestrator | + sleep 5 2026-03-29 00:40:48.647292 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:40:48.681082 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:40:48.681211 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:40:48.681227 | orchestrator | + sleep 5 2026-03-29 00:40:53.686300 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:40:53.726390 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:40:53.726494 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:40:53.726510 | orchestrator | + sleep 5 2026-03-29 00:40:58.734090 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:40:58.785976 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-29 00:40:58.786108 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:40:58.786128 | orchestrator | + sleep 5 2026-03-29 00:41:03.790099 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:41:03.822568 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-29 00:41:03.822642 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:41:03.822652 | orchestrator | + sleep 5 2026-03-29 00:41:08.826364 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:41:08.866964 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-29 00:41:08.867100 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:41:08.867116 | orchestrator | + sleep 5 2026-03-29 00:41:13.872571 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:41:13.908789 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-29 00:41:13.908908 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:41:13.908920 | orchestrator | + sleep 5 2026-03-29 00:41:18.913265 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:41:18.944805 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-29 00:41:18.944927 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:41:18.944945 | orchestrator | + sleep 5 2026-03-29 00:41:23.947968 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:41:23.983658 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-29 00:41:23.983760 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:41:23.983773 | orchestrator | + sleep 5 2026-03-29 00:41:28.987734 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:41:29.024842 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:41:29.024953 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-29 00:41:29.024970 | orchestrator | + local max_attempts=60 2026-03-29 00:41:29.024982 | orchestrator | + local name=kolla-ansible 2026-03-29 00:41:29.024992 | orchestrator | + local attempt_num=1 2026-03-29 00:41:29.025629 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-29 00:41:29.056025 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:41:29.056135 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-03-29 00:41:29.056160 | orchestrator | + local max_attempts=60 2026-03-29 00:41:29.056180 | orchestrator | + local name=osism-ansible 2026-03-29 00:41:29.056200 | orchestrator | + local attempt_num=1 2026-03-29 00:41:29.057195 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-29 00:41:29.087942 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:41:29.088039 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-29 00:41:29.088052 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-29 00:41:29.248842 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-29 00:41:29.386420 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-29 00:41:29.535698 | orchestrator | ARA in osism-ansible already disabled. 2026-03-29 00:41:29.676051 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-29 00:41:29.676454 | orchestrator | + osism apply gather-facts 2026-03-29 00:41:40.913080 | orchestrator | 2026-03-29 00:41:40 | INFO  | Prepare task for execution of gather-facts. 2026-03-29 00:41:40.998521 | orchestrator | 2026-03-29 00:41:41 | INFO  | Task 6b66cc5c-0c83-42c5-b4aa-6e2a84cfbefe (gather-facts) was prepared for execution. 2026-03-29 00:41:40.998586 | orchestrator | 2026-03-29 00:41:41 | INFO  | It takes a moment until task 6b66cc5c-0c83-42c5-b4aa-6e2a84cfbefe (gather-facts) has been started and output is visible here. 
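The `wait_for_container_healthy` helper traced above can be reconstructed from its `set -x` output: it polls `docker inspect` for the container's health status every five seconds, up to a maximum number of attempts. A sketch of that loop (the real helper lives in the testbed scripts and may differ in details such as error reporting):

```shell
#!/usr/bin/env bash
# Reconstructed from the trace: poll the Docker health status of a named
# container until it reports "healthy", sleeping 5s between attempts.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        # Give up once max_attempts polls have been made.
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the log the status moves through `unhealthy` → `starting` → `healthy` for ceph-ansible over roughly a minute, well inside the 60-attempt (5-minute) budget.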
2026-03-29 00:41:44.547453 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-03-29 00:41:44.547530 | orchestrator | -vvvv to see details 2026-03-29 00:41:44.547551 | orchestrator | 2026-03-29 00:41:44.547565 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-29 00:41:44.547572 | orchestrator | 2026-03-29 00:41:44.547579 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-29 00:41:44.547587 | orchestrator | fatal: [testbed-manager]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.5\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.5: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-29 00:41:44.547595 | orchestrator | ...ignoring 2026-03-29 00:41:44.547602 | orchestrator | fatal: [testbed-node-0]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.10\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.10: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-29 00:41:44.547608 | orchestrator | ...ignoring 2026-03-29 00:41:44.547615 | orchestrator | fatal: [testbed-node-2]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.12\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.12: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-29 00:41:44.547621 | orchestrator | ...ignoring 2026-03-29 00:41:44.547627 | orchestrator | fatal: [testbed-node-4]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.14\". 
Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.14: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-29 00:41:44.547634 | orchestrator | ...ignoring 2026-03-29 00:41:44.547640 | orchestrator | fatal: [testbed-node-3]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.13\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.13: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-29 00:41:44.547647 | orchestrator | ...ignoring 2026-03-29 00:41:44.547653 | orchestrator | fatal: [testbed-node-1]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.11\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.11: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-29 00:41:44.547659 | orchestrator | ...ignoring 2026-03-29 00:41:44.547665 | orchestrator | fatal: [testbed-node-5]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.15\". 
Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.15: Permission denied (publickey).\r\n", "unreachable": true} 2026-03-29 00:41:44.547672 | orchestrator | ...ignoring 2026-03-29 00:41:44.547678 | orchestrator | 2026-03-29 00:41:44.547684 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-29 00:41:44.547690 | orchestrator | 2026-03-29 00:41:44.547696 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-29 00:41:44.547702 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:41:44.547709 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:41:44.547716 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:41:44.547741 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:41:44.547747 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:41:44.547753 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:41:44.547759 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:41:44.547765 | orchestrator | 2026-03-29 00:41:44.547783 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:41:44.547790 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-29 00:41:44.547798 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-29 00:41:44.547804 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-29 00:41:44.547810 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-29 00:41:44.547827 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-29 00:41:44.547834 | orchestrator | testbed-node-4 : ok=1  changed=0 
unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-29 00:41:44.547840 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-29 00:41:44.547846 | orchestrator | 2026-03-29 00:41:44.662695 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-29 00:41:44.672500 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-29 00:41:44.691793 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-29 00:41:44.710208 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-29 00:41:44.723323 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-29 00:41:44.743114 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-29 00:41:44.759485 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-29 00:41:44.777138 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-29 00:41:44.789539 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-29 00:41:44.804877 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-29 00:41:44.816083 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-29 00:41:44.828652 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh 
/usr/local/bin/upgrade-ceph-with-rook 2026-03-29 00:41:44.846680 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-29 00:41:44.866002 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-29 00:41:44.883579 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-29 00:41:44.900802 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-29 00:41:44.914517 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-29 00:41:44.932417 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-29 00:41:44.950294 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-29 00:41:44.966221 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-29 00:41:44.980794 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-29 00:41:44.997383 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-29 00:41:45.016556 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-29 00:41:45.026424 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-29 00:41:45.275479 | orchestrator | ok: Runtime: 0:23:16.095512 2026-03-29 00:41:45.384834 | 2026-03-29 00:41:45.384981 | TASK [Deploy services] 2026-03-29 00:41:45.918065 | orchestrator | skipping: Conditional result was 
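The long run of `sudo ln -sf` calls above exposes the numbered deploy/upgrade/bootstrap scripts under short command names. The same pattern, replayed against temp directories instead of `/opt/configuration` and `/usr/local/bin` so it is safe to execute anywhere:

```shell
# Sketch of the symlink pattern from the log, using scratch directories.
SRC_DIR="$(mktemp -d)"   # stands in for /opt/configuration/scripts/deploy
BIN_DIR="$(mktemp -d)"   # stands in for /usr/local/bin

touch "$SRC_DIR/300-openstack.sh"
# -s: symbolic link, -f: replace an existing link so reruns are idempotent
ln -sf "$SRC_DIR/300-openstack.sh" "$BIN_DIR/deploy-openstack"
readlink "$BIN_DIR/deploy-openstack"
```

The `-f` flag is what makes re-running the job harmless: an existing `deploy-openstack` link is silently replaced rather than causing `ln` to fail.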
False 2026-03-29 00:41:45.936331 | 2026-03-29 00:41:45.936497 | TASK [Deploy in a nutshell] 2026-03-29 00:41:46.648805 | orchestrator | 2026-03-29 00:41:46.648945 | orchestrator | # PULL IMAGES 2026-03-29 00:41:46.648959 | orchestrator | 2026-03-29 00:41:46.648969 | orchestrator | + set -e 2026-03-29 00:41:46.648980 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-29 00:41:46.648994 | orchestrator | ++ export INTERACTIVE=false 2026-03-29 00:41:46.649004 | orchestrator | ++ INTERACTIVE=false 2026-03-29 00:41:46.649034 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-29 00:41:46.649048 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-29 00:41:46.649057 | orchestrator | + source /opt/manager-vars.sh 2026-03-29 00:41:46.649065 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-29 00:41:46.649076 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-29 00:41:46.649083 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-29 00:41:46.649094 | orchestrator | ++ CEPH_VERSION=reef 2026-03-29 00:41:46.649101 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-29 00:41:46.649113 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-29 00:41:46.649120 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-29 00:41:46.649129 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-29 00:41:46.649137 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-29 00:41:46.649144 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-29 00:41:46.649151 | orchestrator | ++ export ARA=false 2026-03-29 00:41:46.649158 | orchestrator | ++ ARA=false 2026-03-29 00:41:46.649164 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-29 00:41:46.649171 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-29 00:41:46.649177 | orchestrator | ++ export TEMPEST=true 2026-03-29 00:41:46.649184 | orchestrator | ++ TEMPEST=true 2026-03-29 00:41:46.649191 | orchestrator | ++ export IS_ZUUL=true 2026-03-29 00:41:46.649197 | orchestrator | ++ IS_ZUUL=true 2026-03-29 
00:41:46.649204 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2026-03-29 00:41:46.649210 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2026-03-29 00:41:46.649217 | orchestrator | ++ export EXTERNAL_API=false 2026-03-29 00:41:46.649224 | orchestrator | ++ EXTERNAL_API=false 2026-03-29 00:41:46.649230 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-29 00:41:46.649237 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-29 00:41:46.649244 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-29 00:41:46.649251 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-29 00:41:46.649282 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-29 00:41:46.649291 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-29 00:41:46.649298 | orchestrator | + echo 2026-03-29 00:41:46.649304 | orchestrator | + echo '# PULL IMAGES' 2026-03-29 00:41:46.649311 | orchestrator | + echo 2026-03-29 00:41:46.650006 | orchestrator | ++ semver latest 7.0.0 2026-03-29 00:41:46.705793 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-29 00:41:46.705936 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-29 00:41:46.705966 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-29 00:41:47.834810 | orchestrator | 2026-03-29 00:41:47 | INFO  | Trying to run play pull-images in environment custom 2026-03-29 00:41:57.961578 | orchestrator | 2026-03-29 00:41:57 | INFO  | Prepare task for execution of pull-images. 2026-03-29 00:41:58.029875 | orchestrator | 2026-03-29 00:41:58 | INFO  | Task 67270a44-714b-468c-a3b0-9959b8da917b (pull-images) was prepared for execution. 2026-03-29 00:41:58.029966 | orchestrator | 2026-03-29 00:41:58 | INFO  | Task 67270a44-714b-468c-a3b0-9959b8da917b is running in background. No more output. Check ARA for logs. 
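The trace shows `semver latest 7.0.0` yielding `-1` (so the `-ge 0` test fails), after which a literal match on `latest` still selects the pull branch. A simplified stand-in for that gate, assuming the real `semver` helper treats non-semver input as lowest (the actual helper may differ):

```shell
# compare_semver: prints -1, 0 or 1; non-semver input (e.g. "latest")
# compares as lowest. Simplified stand-in, not the testbed's helper.
compare_semver() {
    a="$1"; b="$2"
    case "$a" in
        *[!0-9.]*|"") echo -1; return ;;   # not X.Y.Z -> sorts lowest
    esac
    if [ "$a" = "$b" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$a" "$b" | sort -V | head -n1)" = "$a" ]; then
        echo -1
    else
        echo 1
    fi
}

MANAGER_VERSION=latest
if [ "$(compare_semver "$MANAGER_VERSION" 7.0.0)" -ge 0 ] \
    || [ "$MANAGER_VERSION" = latest ]; then
    echo "would run: osism apply --no-wait -r 2 -e custom pull-images"
fi
```

The effect is that both "manager version >= 7.0.0" and "manager version is `latest`" take the new code path, which matches the `[[ latest == \l\a\t\e\s\t ]]` check in the trace.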
2026-03-29 00:41:59.302574 | orchestrator | 2026-03-29 00:41:59 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-29 00:42:09.462681 | orchestrator | 2026-03-29 00:42:09 | INFO  | Prepare task for execution of wipe-partitions. 2026-03-29 00:42:09.576005 | orchestrator | 2026-03-29 00:42:09 | INFO  | Task 2e91469d-5f0b-4c66-b9d9-8d6d0b0fbff9 (wipe-partitions) was prepared for execution. 2026-03-29 00:42:09.576109 | orchestrator | 2026-03-29 00:42:09 | INFO  | It takes a moment until task 2e91469d-5f0b-4c66-b9d9-8d6d0b0fbff9 (wipe-partitions) has been started and output is visible here. 2026-03-29 00:42:21.305801 | orchestrator | 2026-03-29 00:42:21.305907 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-29 00:42:21.305921 | orchestrator | 2026-03-29 00:42:21.305931 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-29 00:42:21.305944 | orchestrator | Sunday 29 March 2026 00:42:12 +0000 (0:00:00.154) 0:00:00.154 ********** 2026-03-29 00:42:21.305977 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:42:21.305988 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:42:21.305997 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:42:21.306006 | orchestrator | 2026-03-29 00:42:21.306059 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-29 00:42:21.306071 | orchestrator | Sunday 29 March 2026 00:42:13 +0000 (0:00:01.250) 0:00:01.405 ********** 2026-03-29 00:42:21.306084 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:42:21.306093 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:42:21.306102 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:42:21.306110 | orchestrator | 2026-03-29 00:42:21.306119 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-29 00:42:21.306129 | orchestrator | 
Sunday 29 March 2026 00:42:14 +0000 (0:00:00.225) 0:00:01.630 ********** 2026-03-29 00:42:21.306138 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:42:21.306147 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:42:21.306156 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:42:21.306165 | orchestrator | 2026-03-29 00:42:21.306173 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-29 00:42:21.306182 | orchestrator | Sunday 29 March 2026 00:42:14 +0000 (0:00:00.553) 0:00:02.184 ********** 2026-03-29 00:42:21.306191 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:42:21.306199 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:42:21.306208 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:42:21.306216 | orchestrator | 2026-03-29 00:42:21.306225 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-29 00:42:21.306234 | orchestrator | Sunday 29 March 2026 00:42:14 +0000 (0:00:00.239) 0:00:02.423 ********** 2026-03-29 00:42:21.306242 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-29 00:42:21.306255 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-29 00:42:21.306263 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-29 00:42:21.306272 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-29 00:42:21.306280 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-29 00:42:21.306289 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-29 00:42:21.306298 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-29 00:42:21.306306 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-29 00:42:21.306315 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-29 00:42:21.306324 | orchestrator | 2026-03-29 00:42:21.306386 | orchestrator | TASK [Wipe partitions with wipefs] 
********************************************* 2026-03-29 00:42:21.306396 | orchestrator | Sunday 29 March 2026 00:42:16 +0000 (0:00:01.352) 0:00:03.776 ********** 2026-03-29 00:42:21.306407 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-29 00:42:21.306417 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-29 00:42:21.306427 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-29 00:42:21.306437 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-29 00:42:21.306447 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-29 00:42:21.306457 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-03-29 00:42:21.306466 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-29 00:42:21.306476 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-29 00:42:21.306486 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-29 00:42:21.306496 | orchestrator | 2026-03-29 00:42:21.306506 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-29 00:42:21.306516 | orchestrator | Sunday 29 March 2026 00:42:17 +0000 (0:00:01.370) 0:00:05.146 ********** 2026-03-29 00:42:21.306526 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-29 00:42:21.306536 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-29 00:42:21.306547 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-29 00:42:21.306563 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-29 00:42:21.306582 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-29 00:42:21.306591 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-29 00:42:21.306600 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-29 00:42:21.306608 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-29 00:42:21.306617 | orchestrator | changed: [testbed-node-5] => 
(item=/dev/sdd) 2026-03-29 00:42:21.306625 | orchestrator | 2026-03-29 00:42:21.306634 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-29 00:42:21.306643 | orchestrator | Sunday 29 March 2026 00:42:19 +0000 (0:00:02.179) 0:00:07.326 ********** 2026-03-29 00:42:21.306652 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:42:21.306660 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:42:21.306669 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:42:21.306677 | orchestrator | 2026-03-29 00:42:21.306686 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-03-29 00:42:21.306695 | orchestrator | Sunday 29 March 2026 00:42:20 +0000 (0:00:00.572) 0:00:07.899 ********** 2026-03-29 00:42:21.306703 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:42:21.306712 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:42:21.306721 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:42:21.306730 | orchestrator | 2026-03-29 00:42:21.306739 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:42:21.306749 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:42:21.306759 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:42:21.306783 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:42:21.306792 | orchestrator | 2026-03-29 00:42:21.306801 | orchestrator | 2026-03-29 00:42:21.306809 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:42:21.306818 | orchestrator | Sunday 29 March 2026 00:42:21 +0000 (0:00:00.620) 0:00:08.519 ********** 2026-03-29 00:42:21.306827 | orchestrator | 
=============================================================================== 2026-03-29 00:42:21.306836 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.18s 2026-03-29 00:42:21.306844 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.37s 2026-03-29 00:42:21.306853 | orchestrator | Check device availability ----------------------------------------------- 1.35s 2026-03-29 00:42:21.306861 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.25s 2026-03-29 00:42:21.306870 | orchestrator | Request device events from the kernel ----------------------------------- 0.62s 2026-03-29 00:42:21.306879 | orchestrator | Reload udev rules ------------------------------------------------------- 0.57s 2026-03-29 00:42:21.306887 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.55s 2026-03-29 00:42:21.306896 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s 2026-03-29 00:42:21.306905 | orchestrator | Remove all rook related logical devices --------------------------------- 0.23s 2026-03-29 00:42:32.558433 | orchestrator | 2026-03-29 00:42:32 | INFO  | Prepare task for execution of facts. 2026-03-29 00:42:32.627040 | orchestrator | 2026-03-29 00:42:32 | INFO  | Task 406e6dae-2524-4232-9618-159e0f048fc5 (facts) was prepared for execution. 2026-03-29 00:42:32.628454 | orchestrator | 2026-03-29 00:42:32 | INFO  | It takes a moment until task 406e6dae-2524-4232-9618-159e0f048fc5 (facts) has been started and output is visible here. 
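The wipe-partitions play above boils down to: drop filesystem/partition signatures with `wipefs`, zero the first 32M with `dd`, then reload udev so the kernel re-reads the now-empty devices. The same sequence, replayed against a scratch file instead of `/dev/sdb`..`/dev/sdd` so it is safe to run; the fake "disk" gets an MBR boot signature (0x55AA at offset 510) for `wipefs` to find:

```shell
# Sketch of the wipe sequence from the play, on a scratch file.
DEV="$(mktemp)"                                   # stands in for /dev/sdX
dd if=/dev/zero of="$DEV" bs=1M count=40 status=none
printf '\125\252' | dd of="$DEV" bs=1 seek=510 conv=notrunc status=none
wipefs "$DEV"                                     # lists the dos signature
wipefs -a "$DEV" >/dev/null                       # TASK: Wipe partitions with wipefs
dd if=/dev/zero of="$DEV" bs=1M count=32 conv=fsync,notrunc status=none
# On real hardware the play then reloads udev rules and retriggers
# device events (needs root, so only shown here):
#   udevadm control --reload-rules && udevadm trigger
```

After `wipefs -a`, listing the device again prints nothing, which is the state the subsequent Ceph OSD provisioning expects.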
2026-03-29 00:42:43.804904 | orchestrator | 2026-03-29 00:42:43.804995 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-29 00:42:43.805006 | orchestrator | 2026-03-29 00:42:43.805035 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-29 00:42:43.805042 | orchestrator | Sunday 29 March 2026 00:42:35 +0000 (0:00:00.317) 0:00:00.317 ********** 2026-03-29 00:42:43.805049 | orchestrator | ok: [testbed-manager] 2026-03-29 00:42:43.805056 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:42:43.805062 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:42:43.805068 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:42:43.805074 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:42:43.805081 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:42:43.805087 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:42:43.805093 | orchestrator | 2026-03-29 00:42:43.805112 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-29 00:42:43.805122 | orchestrator | Sunday 29 March 2026 00:42:36 +0000 (0:00:01.322) 0:00:01.640 ********** 2026-03-29 00:42:43.805132 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:42:43.805142 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:42:43.805158 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:42:43.805172 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:42:43.805180 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:42:43.805189 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:42:43.805198 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:42:43.805208 | orchestrator | 2026-03-29 00:42:43.805217 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-29 00:42:43.805226 | orchestrator | 2026-03-29 00:42:43.805235 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-29 00:42:43.805246 | orchestrator | Sunday 29 March 2026 00:42:37 +0000 (0:00:01.033) 0:00:02.674 ********** 2026-03-29 00:42:43.805257 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:42:43.805266 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:42:43.805276 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:42:43.805287 | orchestrator | ok: [testbed-manager] 2026-03-29 00:42:43.805296 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:42:43.805307 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:42:43.805316 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:42:43.805323 | orchestrator | 2026-03-29 00:42:43.805329 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-29 00:42:43.805335 | orchestrator | 2026-03-29 00:42:43.805341 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-29 00:42:43.805348 | orchestrator | Sunday 29 March 2026 00:42:43 +0000 (0:00:05.181) 0:00:07.855 ********** 2026-03-29 00:42:43.805354 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:42:43.805360 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:42:43.805366 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:42:43.805424 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:42:43.805431 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:42:43.805437 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:42:43.805444 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:42:43.805450 | orchestrator | 2026-03-29 00:42:43.805456 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:42:43.805463 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:42:43.805471 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-29 00:42:43.805479 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:42:43.805486 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:42:43.805494 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:42:43.805511 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:42:43.805519 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:42:43.805526 | orchestrator | 2026-03-29 00:42:43.805533 | orchestrator | 2026-03-29 00:42:43.805540 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:42:43.805547 | orchestrator | Sunday 29 March 2026 00:42:43 +0000 (0:00:00.451) 0:00:08.307 ********** 2026-03-29 00:42:43.805555 | orchestrator | =============================================================================== 2026-03-29 00:42:43.805562 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.18s 2026-03-29 00:42:43.805569 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.32s 2026-03-29 00:42:43.805576 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.03s 2026-03-29 00:42:43.805583 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2026-03-29 00:42:45.000497 | orchestrator | 2026-03-29 00:42:45 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-03-29 00:42:45.053628 | orchestrator | 2026-03-29 00:42:45 | INFO  | Task a69b8f0d-f57b-4982-aa95-0433c8896868 (ceph-configure-lvm-volumes) was prepared for execution. 
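The `osism.commons.facts` tasks above ("Create custom facts directory", "Copy fact files") rely on Ansible's local-facts mechanism: executable `*.fact` files under `/etc/ansible/facts.d` emit JSON that setup exposes as `ansible_local.<name>`. A minimal sketch of such a fact script, written to a temp directory instead of `/etc` (file name and keys illustrative):

```shell
# Sketch: an executable .fact file whose JSON output Ansible would pick
# up as ansible_local.testbed on the next fact-gathering run.
FACTS_D="$(mktemp -d)"              # stands in for /etc/ansible/facts.d
cat > "$FACTS_D/testbed.fact" <<'EOF'
#!/bin/sh
printf '{"deploy_mode": "manager", "nodes": 6}\n'
EOF
chmod +x "$FACTS_D/testbed.fact"
"$FACTS_D/testbed.fact"
```

Since no fact files were shipped in this run, "Copy fact files" is skipped on every host and the subsequent "Gathers facts about hosts" play only refreshes the standard facts.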
2026-03-29 00:42:45.053745 | orchestrator | 2026-03-29 00:42:45 | INFO  | It takes a moment until task a69b8f0d-f57b-4982-aa95-0433c8896868 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-03-29 00:42:55.946677 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-29 00:42:55.946793 | orchestrator | 2.16.14 2026-03-29 00:42:55.946810 | orchestrator | 2026-03-29 00:42:55.946836 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-29 00:42:55.946852 | orchestrator | 2026-03-29 00:42:55.946871 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-29 00:42:55.946890 | orchestrator | Sunday 29 March 2026 00:42:49 +0000 (0:00:00.297) 0:00:00.297 ********** 2026-03-29 00:42:55.946910 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 00:42:55.946929 | orchestrator | 2026-03-29 00:42:55.946947 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-29 00:42:55.946987 | orchestrator | Sunday 29 March 2026 00:42:49 +0000 (0:00:00.234) 0:00:00.531 ********** 2026-03-29 00:42:55.947022 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:42:55.947041 | orchestrator | 2026-03-29 00:42:55.947059 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:42:55.947077 | orchestrator | Sunday 29 March 2026 00:42:49 +0000 (0:00:00.220) 0:00:00.752 ********** 2026-03-29 00:42:55.947094 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-29 00:42:55.947112 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-29 00:42:55.947131 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-29 00:42:55.947150 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-29 00:42:55.947217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-29 00:42:55.947237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-29 00:42:55.947256 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-29 00:42:55.947276 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-29 00:42:55.947296 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-29 00:42:55.947315 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-29 00:42:55.947364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-29 00:42:55.947384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-29 00:42:55.947432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-29 00:42:55.947451 | orchestrator | 2026-03-29 00:42:55.947471 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:42:55.947490 | orchestrator | Sunday 29 March 2026 00:42:50 +0000 (0:00:00.358) 0:00:01.110 ********** 2026-03-29 00:42:55.947510 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:42:55.947529 | orchestrator | 2026-03-29 00:42:55.947548 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:42:55.947566 | orchestrator | Sunday 29 March 2026 00:42:50 +0000 (0:00:00.460) 0:00:01.571 ********** 2026-03-29 00:42:55.947584 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:42:55.947603 | orchestrator | 2026-03-29 00:42:55.947621 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:42:55.947645 | orchestrator | Sunday 29 March 2026 00:42:50 +0000 (0:00:00.199) 0:00:01.770 ********** 2026-03-29 00:42:55.947663 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:42:55.947681 | orchestrator | 2026-03-29 00:42:55.947699 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:42:55.947718 | orchestrator | Sunday 29 March 2026 00:42:51 +0000 (0:00:00.187) 0:00:01.958 ********** 2026-03-29 00:42:55.947737 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:42:55.947755 | orchestrator | 2026-03-29 00:42:55.947773 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:42:55.947792 | orchestrator | Sunday 29 March 2026 00:42:51 +0000 (0:00:00.185) 0:00:02.143 ********** 2026-03-29 00:42:55.947810 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:42:55.947828 | orchestrator | 2026-03-29 00:42:55.947847 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:42:55.947866 | orchestrator | Sunday 29 March 2026 00:42:51 +0000 (0:00:00.189) 0:00:02.333 ********** 2026-03-29 00:42:55.947884 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:42:55.947902 | orchestrator | 2026-03-29 00:42:55.947921 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:42:55.947939 | orchestrator | Sunday 29 March 2026 00:42:51 +0000 (0:00:00.196) 0:00:02.530 ********** 2026-03-29 00:42:55.947957 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:42:55.947976 | orchestrator | 2026-03-29 00:42:55.947994 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:42:55.948013 | orchestrator | Sunday 29 March 2026 00:42:51 +0000 (0:00:00.194) 0:00:02.724 ********** 
2026-03-29 00:42:55.948031 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:42:55.948049 | orchestrator | 2026-03-29 00:42:55.948067 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:42:55.948085 | orchestrator | Sunday 29 March 2026 00:42:51 +0000 (0:00:00.194) 0:00:02.918 ********** 2026-03-29 00:42:55.948103 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253) 2026-03-29 00:42:55.948123 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253) 2026-03-29 00:42:55.948141 | orchestrator | 2026-03-29 00:42:55.948160 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:42:55.948204 | orchestrator | Sunday 29 March 2026 00:42:52 +0000 (0:00:00.404) 0:00:03.323 ********** 2026-03-29 00:42:55.948223 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cf707a58-c66d-4c72-840a-e00f4b50b6ac) 2026-03-29 00:42:55.948242 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cf707a58-c66d-4c72-840a-e00f4b50b6ac) 2026-03-29 00:42:55.948261 | orchestrator | 2026-03-29 00:42:55.948279 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:42:55.948312 | orchestrator | Sunday 29 March 2026 00:42:52 +0000 (0:00:00.386) 0:00:03.709 ********** 2026-03-29 00:42:55.948331 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_756b3521-cc64-4337-8d74-551033403337) 2026-03-29 00:42:55.948349 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_756b3521-cc64-4337-8d74-551033403337) 2026-03-29 00:42:55.948368 | orchestrator | 2026-03-29 00:42:55.948386 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:42:55.948428 | orchestrator | Sunday 29 March 2026 00:42:53 +0000 
(0:00:00.528) 0:00:04.238 ********** 2026-03-29 00:42:55.948447 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_006c3921-cee3-45d1-95d5-34c501bc63f9) 2026-03-29 00:42:55.948466 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_006c3921-cee3-45d1-95d5-34c501bc63f9) 2026-03-29 00:42:55.948485 | orchestrator | 2026-03-29 00:42:55.948503 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:42:55.948522 | orchestrator | Sunday 29 March 2026 00:42:53 +0000 (0:00:00.516) 0:00:04.754 ********** 2026-03-29 00:42:55.948540 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-29 00:42:55.948558 | orchestrator | 2026-03-29 00:42:55.948576 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:42:55.948594 | orchestrator | Sunday 29 March 2026 00:42:54 +0000 (0:00:00.586) 0:00:05.340 ********** 2026-03-29 00:42:55.948623 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-29 00:42:55.948642 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-29 00:42:55.948660 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-29 00:42:55.948679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-29 00:42:55.948697 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-29 00:42:55.948715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-29 00:42:55.948733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-29 00:42:55.948751 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-3 => (item=loop7) 2026-03-29 00:42:55.948770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-29 00:42:55.948788 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-29 00:42:55.948806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-29 00:42:55.948824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-29 00:42:55.948842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-29 00:42:55.948860 | orchestrator | 2026-03-29 00:42:55.948878 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:42:55.948897 | orchestrator | Sunday 29 March 2026 00:42:54 +0000 (0:00:00.336) 0:00:05.677 ********** 2026-03-29 00:42:55.948915 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:42:55.948933 | orchestrator | 2026-03-29 00:42:55.948951 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:42:55.948970 | orchestrator | Sunday 29 March 2026 00:42:54 +0000 (0:00:00.184) 0:00:05.862 ********** 2026-03-29 00:42:55.948988 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:42:55.949006 | orchestrator | 2026-03-29 00:42:55.949024 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:42:55.949043 | orchestrator | Sunday 29 March 2026 00:42:55 +0000 (0:00:00.172) 0:00:06.034 ********** 2026-03-29 00:42:55.949061 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:42:55.949089 | orchestrator | 2026-03-29 00:42:55.949107 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:42:55.949126 | orchestrator | Sunday 29 March 2026 00:42:55 +0000 
(0:00:00.171) 0:00:06.206 ********** 2026-03-29 00:42:55.949144 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:42:55.949199 | orchestrator | 2026-03-29 00:42:55.949219 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:42:55.949238 | orchestrator | Sunday 29 March 2026 00:42:55 +0000 (0:00:00.162) 0:00:06.368 ********** 2026-03-29 00:42:55.949257 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:42:55.949276 | orchestrator | 2026-03-29 00:42:55.949303 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:42:55.949322 | orchestrator | Sunday 29 March 2026 00:42:55 +0000 (0:00:00.182) 0:00:06.551 ********** 2026-03-29 00:42:55.949343 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:42:55.949363 | orchestrator | 2026-03-29 00:42:55.949383 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:42:55.949500 | orchestrator | Sunday 29 March 2026 00:42:55 +0000 (0:00:00.188) 0:00:06.740 ********** 2026-03-29 00:42:55.949519 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:42:55.949538 | orchestrator | 2026-03-29 00:42:55.949569 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:02.607202 | orchestrator | Sunday 29 March 2026 00:42:55 +0000 (0:00:00.154) 0:00:06.894 ********** 2026-03-29 00:43:02.607313 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:43:02.607329 | orchestrator | 2026-03-29 00:43:02.607342 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:02.607354 | orchestrator | Sunday 29 March 2026 00:42:56 +0000 (0:00:00.170) 0:00:07.065 ********** 2026-03-29 00:43:02.607365 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-29 00:43:02.607377 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-29 
00:43:02.607388 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-29 00:43:02.607399 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-29 00:43:02.607459 | orchestrator | 2026-03-29 00:43:02.607471 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:02.607482 | orchestrator | Sunday 29 March 2026 00:42:56 +0000 (0:00:00.868) 0:00:07.934 ********** 2026-03-29 00:43:02.607493 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:43:02.607504 | orchestrator | 2026-03-29 00:43:02.607515 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:02.607526 | orchestrator | Sunday 29 March 2026 00:42:57 +0000 (0:00:00.187) 0:00:08.122 ********** 2026-03-29 00:43:02.607537 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:43:02.607547 | orchestrator | 2026-03-29 00:43:02.607558 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:02.607569 | orchestrator | Sunday 29 March 2026 00:42:57 +0000 (0:00:00.166) 0:00:08.288 ********** 2026-03-29 00:43:02.607580 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:43:02.607591 | orchestrator | 2026-03-29 00:43:02.607601 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:02.607612 | orchestrator | Sunday 29 March 2026 00:42:57 +0000 (0:00:00.182) 0:00:08.471 ********** 2026-03-29 00:43:02.607623 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:43:02.607634 | orchestrator | 2026-03-29 00:43:02.607644 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-29 00:43:02.607655 | orchestrator | Sunday 29 March 2026 00:42:57 +0000 (0:00:00.183) 0:00:08.655 ********** 2026-03-29 00:43:02.607666 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-03-29 00:43:02.607677 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-29 00:43:02.607688 | orchestrator | 2026-03-29 00:43:02.607699 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-29 00:43:02.607710 | orchestrator | Sunday 29 March 2026 00:42:57 +0000 (0:00:00.153) 0:00:08.809 ********** 2026-03-29 00:43:02.607743 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:43:02.607756 | orchestrator | 2026-03-29 00:43:02.607770 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-29 00:43:02.607783 | orchestrator | Sunday 29 March 2026 00:42:57 +0000 (0:00:00.113) 0:00:08.922 ********** 2026-03-29 00:43:02.607795 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:43:02.607807 | orchestrator | 2026-03-29 00:43:02.607823 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-29 00:43:02.607837 | orchestrator | Sunday 29 March 2026 00:42:58 +0000 (0:00:00.127) 0:00:09.050 ********** 2026-03-29 00:43:02.607849 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:43:02.607861 | orchestrator | 2026-03-29 00:43:02.607873 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-29 00:43:02.607886 | orchestrator | Sunday 29 March 2026 00:42:58 +0000 (0:00:00.119) 0:00:09.170 ********** 2026-03-29 00:43:02.607899 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:43:02.607912 | orchestrator | 2026-03-29 00:43:02.607925 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-29 00:43:02.607938 | orchestrator | Sunday 29 March 2026 00:42:58 +0000 (0:00:00.124) 0:00:09.294 ********** 2026-03-29 00:43:02.607952 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cb4f0063-6caa-55a9-9ed6-73f648958ae5'}}) 2026-03-29 00:43:02.607966 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9db53e8f-4e16-545c-9934-db4b909c3b32'}}) 2026-03-29 00:43:02.607978 | orchestrator | 2026-03-29 00:43:02.607991 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-29 00:43:02.608003 | orchestrator | Sunday 29 March 2026 00:42:58 +0000 (0:00:00.156) 0:00:09.450 ********** 2026-03-29 00:43:02.608016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cb4f0063-6caa-55a9-9ed6-73f648958ae5'}})  2026-03-29 00:43:02.608043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9db53e8f-4e16-545c-9934-db4b909c3b32'}})  2026-03-29 00:43:02.608055 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:43:02.608068 | orchestrator | 2026-03-29 00:43:02.608079 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-29 00:43:02.608089 | orchestrator | Sunday 29 March 2026 00:42:58 +0000 (0:00:00.134) 0:00:09.585 ********** 2026-03-29 00:43:02.608100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cb4f0063-6caa-55a9-9ed6-73f648958ae5'}})  2026-03-29 00:43:02.608111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9db53e8f-4e16-545c-9934-db4b909c3b32'}})  2026-03-29 00:43:02.608122 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:43:02.608133 | orchestrator | 2026-03-29 00:43:02.608144 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-29 00:43:02.608154 | orchestrator | Sunday 29 March 2026 00:42:58 +0000 (0:00:00.267) 0:00:09.853 ********** 2026-03-29 00:43:02.608165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cb4f0063-6caa-55a9-9ed6-73f648958ae5'}})  2026-03-29 00:43:02.608195 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9db53e8f-4e16-545c-9934-db4b909c3b32'}})  2026-03-29 00:43:02.608207 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:43:02.608218 | orchestrator | 2026-03-29 00:43:02.608228 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-29 00:43:02.608239 | orchestrator | Sunday 29 March 2026 00:42:59 +0000 (0:00:00.136) 0:00:09.989 ********** 2026-03-29 00:43:02.608250 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:43:02.608261 | orchestrator | 2026-03-29 00:43:02.608271 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-29 00:43:02.608282 | orchestrator | Sunday 29 March 2026 00:42:59 +0000 (0:00:00.116) 0:00:10.106 ********** 2026-03-29 00:43:02.608293 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:43:02.608311 | orchestrator | 2026-03-29 00:43:02.608322 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-29 00:43:02.608333 | orchestrator | Sunday 29 March 2026 00:42:59 +0000 (0:00:00.126) 0:00:10.232 ********** 2026-03-29 00:43:02.608344 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:43:02.608355 | orchestrator | 2026-03-29 00:43:02.608376 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-29 00:43:02.608387 | orchestrator | Sunday 29 March 2026 00:42:59 +0000 (0:00:00.107) 0:00:10.340 ********** 2026-03-29 00:43:02.608398 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:43:02.608444 | orchestrator | 2026-03-29 00:43:02.608455 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-29 00:43:02.608466 | orchestrator | Sunday 29 March 2026 00:42:59 +0000 (0:00:00.114) 0:00:10.455 ********** 2026-03-29 00:43:02.608476 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:43:02.608487 | orchestrator | 2026-03-29 
00:43:02.608498 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-29 00:43:02.608509 | orchestrator | Sunday 29 March 2026 00:42:59 +0000 (0:00:00.117) 0:00:10.573 ********** 2026-03-29 00:43:02.608519 | orchestrator | ok: [testbed-node-3] => { 2026-03-29 00:43:02.608530 | orchestrator |  "ceph_osd_devices": { 2026-03-29 00:43:02.608541 | orchestrator |  "sdb": { 2026-03-29 00:43:02.608553 | orchestrator |  "osd_lvm_uuid": "cb4f0063-6caa-55a9-9ed6-73f648958ae5" 2026-03-29 00:43:02.608564 | orchestrator |  }, 2026-03-29 00:43:02.608575 | orchestrator |  "sdc": { 2026-03-29 00:43:02.608586 | orchestrator |  "osd_lvm_uuid": "9db53e8f-4e16-545c-9934-db4b909c3b32" 2026-03-29 00:43:02.608597 | orchestrator |  } 2026-03-29 00:43:02.608608 | orchestrator |  } 2026-03-29 00:43:02.608619 | orchestrator | } 2026-03-29 00:43:02.608630 | orchestrator | 2026-03-29 00:43:02.608641 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-29 00:43:02.608652 | orchestrator | Sunday 29 March 2026 00:42:59 +0000 (0:00:00.118) 0:00:10.691 ********** 2026-03-29 00:43:02.608662 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:43:02.608673 | orchestrator | 2026-03-29 00:43:02.608684 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-29 00:43:02.608694 | orchestrator | Sunday 29 March 2026 00:42:59 +0000 (0:00:00.113) 0:00:10.805 ********** 2026-03-29 00:43:02.608705 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:43:02.608715 | orchestrator | 2026-03-29 00:43:02.608726 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-29 00:43:02.608737 | orchestrator | Sunday 29 March 2026 00:42:59 +0000 (0:00:00.122) 0:00:10.927 ********** 2026-03-29 00:43:02.608748 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:43:02.608758 | orchestrator | 2026-03-29 
00:43:02.608769 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-29 00:43:02.608780 | orchestrator | Sunday 29 March 2026 00:43:00 +0000 (0:00:00.111) 0:00:11.038 ********** 2026-03-29 00:43:02.608790 | orchestrator | changed: [testbed-node-3] => { 2026-03-29 00:43:02.608801 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-29 00:43:02.608812 | orchestrator |  "ceph_osd_devices": { 2026-03-29 00:43:02.608823 | orchestrator |  "sdb": { 2026-03-29 00:43:02.608834 | orchestrator |  "osd_lvm_uuid": "cb4f0063-6caa-55a9-9ed6-73f648958ae5" 2026-03-29 00:43:02.608844 | orchestrator |  }, 2026-03-29 00:43:02.608855 | orchestrator |  "sdc": { 2026-03-29 00:43:02.608866 | orchestrator |  "osd_lvm_uuid": "9db53e8f-4e16-545c-9934-db4b909c3b32" 2026-03-29 00:43:02.608877 | orchestrator |  } 2026-03-29 00:43:02.608888 | orchestrator |  }, 2026-03-29 00:43:02.608899 | orchestrator |  "lvm_volumes": [ 2026-03-29 00:43:02.608910 | orchestrator |  { 2026-03-29 00:43:02.608921 | orchestrator |  "data": "osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5", 2026-03-29 00:43:02.608931 | orchestrator |  "data_vg": "ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5" 2026-03-29 00:43:02.608949 | orchestrator |  }, 2026-03-29 00:43:02.608960 | orchestrator |  { 2026-03-29 00:43:02.608970 | orchestrator |  "data": "osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32", 2026-03-29 00:43:02.608981 | orchestrator |  "data_vg": "ceph-9db53e8f-4e16-545c-9934-db4b909c3b32" 2026-03-29 00:43:02.608992 | orchestrator |  } 2026-03-29 00:43:02.609003 | orchestrator |  ] 2026-03-29 00:43:02.609014 | orchestrator |  } 2026-03-29 00:43:02.609028 | orchestrator | } 2026-03-29 00:43:02.609047 | orchestrator | 2026-03-29 00:43:02.609066 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-29 00:43:02.609084 | orchestrator | Sunday 29 March 2026 00:43:00 +0000 (0:00:00.193) 0:00:11.232 ********** 2026-03-29 
00:43:02.609103 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 00:43:02.609121 | orchestrator | 2026-03-29 00:43:02.609139 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-29 00:43:02.609158 | orchestrator | 2026-03-29 00:43:02.609170 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-29 00:43:02.609181 | orchestrator | Sunday 29 March 2026 00:43:02 +0000 (0:00:01.891) 0:00:13.124 ********** 2026-03-29 00:43:02.609191 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-29 00:43:02.609202 | orchestrator | 2026-03-29 00:43:02.609219 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-29 00:43:02.609230 | orchestrator | Sunday 29 March 2026 00:43:02 +0000 (0:00:00.225) 0:00:13.350 ********** 2026-03-29 00:43:02.609241 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:43:02.609252 | orchestrator | 2026-03-29 00:43:02.609271 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:09.239955 | orchestrator | Sunday 29 March 2026 00:43:02 +0000 (0:00:00.205) 0:00:13.555 ********** 2026-03-29 00:43:09.240060 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-29 00:43:09.240076 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-29 00:43:09.240086 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-29 00:43:09.240096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-29 00:43:09.240106 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-29 00:43:09.240116 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-29 00:43:09.240126 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-29 00:43:09.240140 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-29 00:43:09.240150 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-29 00:43:09.240161 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-29 00:43:09.240171 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-29 00:43:09.240181 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-29 00:43:09.240190 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-29 00:43:09.240200 | orchestrator | 2026-03-29 00:43:09.240211 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:09.240221 | orchestrator | Sunday 29 March 2026 00:43:02 +0000 (0:00:00.323) 0:00:13.879 ********** 2026-03-29 00:43:09.240231 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:09.240242 | orchestrator | 2026-03-29 00:43:09.240251 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:09.240261 | orchestrator | Sunday 29 March 2026 00:43:03 +0000 (0:00:00.180) 0:00:14.059 ********** 2026-03-29 00:43:09.240292 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:09.240302 | orchestrator | 2026-03-29 00:43:09.240312 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:09.240322 | orchestrator | Sunday 29 March 2026 00:43:03 +0000 (0:00:00.186) 0:00:14.246 ********** 2026-03-29 00:43:09.240332 | orchestrator | skipping: 
[testbed-node-4] 2026-03-29 00:43:09.240341 | orchestrator | 2026-03-29 00:43:09.240351 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:09.240361 | orchestrator | Sunday 29 March 2026 00:43:03 +0000 (0:00:00.183) 0:00:14.430 ********** 2026-03-29 00:43:09.240370 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:09.240380 | orchestrator | 2026-03-29 00:43:09.240390 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:09.240399 | orchestrator | Sunday 29 March 2026 00:43:03 +0000 (0:00:00.188) 0:00:14.619 ********** 2026-03-29 00:43:09.240409 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:09.240445 | orchestrator | 2026-03-29 00:43:09.240455 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:09.240465 | orchestrator | Sunday 29 March 2026 00:43:04 +0000 (0:00:00.428) 0:00:15.047 ********** 2026-03-29 00:43:09.240474 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:09.240484 | orchestrator | 2026-03-29 00:43:09.240494 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:09.240504 | orchestrator | Sunday 29 March 2026 00:43:04 +0000 (0:00:00.173) 0:00:15.220 ********** 2026-03-29 00:43:09.240516 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:09.240527 | orchestrator | 2026-03-29 00:43:09.240538 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:09.240550 | orchestrator | Sunday 29 March 2026 00:43:04 +0000 (0:00:00.185) 0:00:15.405 ********** 2026-03-29 00:43:09.240562 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:09.240573 | orchestrator | 2026-03-29 00:43:09.240584 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:09.240595 | 
orchestrator | Sunday 29 March 2026 00:43:04 +0000 (0:00:00.175) 0:00:15.581 ********** 2026-03-29 00:43:09.240607 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a) 2026-03-29 00:43:09.240620 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a) 2026-03-29 00:43:09.240631 | orchestrator | 2026-03-29 00:43:09.240660 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:09.240673 | orchestrator | Sunday 29 March 2026 00:43:04 +0000 (0:00:00.367) 0:00:15.949 ********** 2026-03-29 00:43:09.240684 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f8431fa8-afc6-4068-bff4-a67d5c0799f9) 2026-03-29 00:43:09.240695 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f8431fa8-afc6-4068-bff4-a67d5c0799f9) 2026-03-29 00:43:09.240707 | orchestrator | 2026-03-29 00:43:09.240718 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:09.240730 | orchestrator | Sunday 29 March 2026 00:43:05 +0000 (0:00:00.384) 0:00:16.333 ********** 2026-03-29 00:43:09.240741 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_08797191-4f26-4e13-8d53-ed6640c6fbd2) 2026-03-29 00:43:09.240753 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_08797191-4f26-4e13-8d53-ed6640c6fbd2) 2026-03-29 00:43:09.240764 | orchestrator | 2026-03-29 00:43:09.240776 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:09.240802 | orchestrator | Sunday 29 March 2026 00:43:05 +0000 (0:00:00.384) 0:00:16.717 ********** 2026-03-29 00:43:09.240814 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6eff12ff-972f-42e1-84ee-23c8e4926f48) 2026-03-29 00:43:09.240825 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_6eff12ff-972f-42e1-84ee-23c8e4926f48) 2026-03-29 00:43:09.240837 | orchestrator | 2026-03-29 00:43:09.240855 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:09.240867 | orchestrator | Sunday 29 March 2026 00:43:06 +0000 (0:00:00.372) 0:00:17.090 ********** 2026-03-29 00:43:09.240878 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-29 00:43:09.240890 | orchestrator | 2026-03-29 00:43:09.240900 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:09.240909 | orchestrator | Sunday 29 March 2026 00:43:06 +0000 (0:00:00.309) 0:00:17.399 ********** 2026-03-29 00:43:09.240919 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-29 00:43:09.240928 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-29 00:43:09.240938 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-29 00:43:09.240947 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-29 00:43:09.240957 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-29 00:43:09.240966 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-29 00:43:09.240976 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-29 00:43:09.240986 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-29 00:43:09.240995 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-29 00:43:09.241005 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-29 00:43:09.241014 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-29 00:43:09.241023 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-29 00:43:09.241033 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-29 00:43:09.241042 | orchestrator | 2026-03-29 00:43:09.241052 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:09.241061 | orchestrator | Sunday 29 March 2026 00:43:06 +0000 (0:00:00.361) 0:00:17.761 ********** 2026-03-29 00:43:09.241071 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:09.241081 | orchestrator | 2026-03-29 00:43:09.241090 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:09.241100 | orchestrator | Sunday 29 March 2026 00:43:07 +0000 (0:00:00.199) 0:00:17.960 ********** 2026-03-29 00:43:09.241110 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:09.241119 | orchestrator | 2026-03-29 00:43:09.241129 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:09.241139 | orchestrator | Sunday 29 March 2026 00:43:07 +0000 (0:00:00.474) 0:00:18.435 ********** 2026-03-29 00:43:09.241149 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:09.241158 | orchestrator | 2026-03-29 00:43:09.241168 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:09.241178 | orchestrator | Sunday 29 March 2026 00:43:07 +0000 (0:00:00.185) 0:00:18.621 ********** 2026-03-29 00:43:09.241187 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:09.241197 | orchestrator | 2026-03-29 00:43:09.241206 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-29 00:43:09.241216 | orchestrator | Sunday 29 March 2026 00:43:07 +0000 (0:00:00.181) 0:00:18.802 ********** 2026-03-29 00:43:09.241225 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:09.241235 | orchestrator | 2026-03-29 00:43:09.241244 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:09.241254 | orchestrator | Sunday 29 March 2026 00:43:08 +0000 (0:00:00.176) 0:00:18.978 ********** 2026-03-29 00:43:09.241263 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:09.241278 | orchestrator | 2026-03-29 00:43:09.241293 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:09.241303 | orchestrator | Sunday 29 March 2026 00:43:08 +0000 (0:00:00.191) 0:00:19.170 ********** 2026-03-29 00:43:09.241313 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:09.241322 | orchestrator | 2026-03-29 00:43:09.241332 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:09.241341 | orchestrator | Sunday 29 March 2026 00:43:08 +0000 (0:00:00.177) 0:00:19.347 ********** 2026-03-29 00:43:09.241351 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:09.241360 | orchestrator | 2026-03-29 00:43:09.241370 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:09.241379 | orchestrator | Sunday 29 March 2026 00:43:08 +0000 (0:00:00.158) 0:00:19.506 ********** 2026-03-29 00:43:09.241389 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-29 00:43:09.241399 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-29 00:43:09.241409 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-29 00:43:09.241436 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-29 00:43:09.241446 | orchestrator | 2026-03-29 
00:43:09.241456 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:09.241465 | orchestrator | Sunday 29 March 2026 00:43:09 +0000 (0:00:00.580) 0:00:20.086 ********** 2026-03-29 00:43:09.241475 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:14.616775 | orchestrator | 2026-03-29 00:43:14.616850 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:14.616857 | orchestrator | Sunday 29 March 2026 00:43:09 +0000 (0:00:00.178) 0:00:20.265 ********** 2026-03-29 00:43:14.616862 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:14.616867 | orchestrator | 2026-03-29 00:43:14.616872 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:14.616876 | orchestrator | Sunday 29 March 2026 00:43:09 +0000 (0:00:00.165) 0:00:20.430 ********** 2026-03-29 00:43:14.616880 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:14.616883 | orchestrator | 2026-03-29 00:43:14.616887 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:14.616891 | orchestrator | Sunday 29 March 2026 00:43:09 +0000 (0:00:00.170) 0:00:20.600 ********** 2026-03-29 00:43:14.616895 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:14.616899 | orchestrator | 2026-03-29 00:43:14.616903 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-29 00:43:14.616906 | orchestrator | Sunday 29 March 2026 00:43:09 +0000 (0:00:00.178) 0:00:20.778 ********** 2026-03-29 00:43:14.616910 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-29 00:43:14.616914 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-29 00:43:14.616918 | orchestrator | 2026-03-29 00:43:14.616922 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-03-29 00:43:14.616926 | orchestrator | Sunday 29 March 2026 00:43:10 +0000 (0:00:00.275) 0:00:21.054 ********** 2026-03-29 00:43:14.616930 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:14.616933 | orchestrator | 2026-03-29 00:43:14.616937 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-29 00:43:14.616941 | orchestrator | Sunday 29 March 2026 00:43:10 +0000 (0:00:00.121) 0:00:21.175 ********** 2026-03-29 00:43:14.616945 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:14.616948 | orchestrator | 2026-03-29 00:43:14.616952 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-29 00:43:14.616956 | orchestrator | Sunday 29 March 2026 00:43:10 +0000 (0:00:00.128) 0:00:21.304 ********** 2026-03-29 00:43:14.616960 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:14.616964 | orchestrator | 2026-03-29 00:43:14.616968 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-29 00:43:14.616972 | orchestrator | Sunday 29 March 2026 00:43:10 +0000 (0:00:00.118) 0:00:21.422 ********** 2026-03-29 00:43:14.616990 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:43:14.616994 | orchestrator | 2026-03-29 00:43:14.616998 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-29 00:43:14.617002 | orchestrator | Sunday 29 March 2026 00:43:10 +0000 (0:00:00.117) 0:00:21.540 ********** 2026-03-29 00:43:14.617006 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ce40293b-1bc0-5558-a1b7-16c9a624d7c9'}}) 2026-03-29 00:43:14.617011 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c9903f66-e17d-5d19-b140-42471f0a3aa8'}}) 2026-03-29 00:43:14.617015 | orchestrator | 2026-03-29 00:43:14.617018 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-29 00:43:14.617022 | orchestrator | Sunday 29 March 2026 00:43:10 +0000 (0:00:00.126) 0:00:21.667 ********** 2026-03-29 00:43:14.617027 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ce40293b-1bc0-5558-a1b7-16c9a624d7c9'}})  2026-03-29 00:43:14.617032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c9903f66-e17d-5d19-b140-42471f0a3aa8'}})  2026-03-29 00:43:14.617035 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:14.617039 | orchestrator | 2026-03-29 00:43:14.617043 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-29 00:43:14.617047 | orchestrator | Sunday 29 March 2026 00:43:10 +0000 (0:00:00.123) 0:00:21.790 ********** 2026-03-29 00:43:14.617051 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ce40293b-1bc0-5558-a1b7-16c9a624d7c9'}})  2026-03-29 00:43:14.617054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c9903f66-e17d-5d19-b140-42471f0a3aa8'}})  2026-03-29 00:43:14.617059 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:14.617062 | orchestrator | 2026-03-29 00:43:14.617066 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-29 00:43:14.617070 | orchestrator | Sunday 29 March 2026 00:43:10 +0000 (0:00:00.115) 0:00:21.905 ********** 2026-03-29 00:43:14.617074 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ce40293b-1bc0-5558-a1b7-16c9a624d7c9'}})  2026-03-29 00:43:14.617077 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c9903f66-e17d-5d19-b140-42471f0a3aa8'}})  2026-03-29 00:43:14.617081 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:14.617085 | 
orchestrator | 2026-03-29 00:43:14.617100 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-29 00:43:14.617104 | orchestrator | Sunday 29 March 2026 00:43:11 +0000 (0:00:00.119) 0:00:22.024 ********** 2026-03-29 00:43:14.617108 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:43:14.617111 | orchestrator | 2026-03-29 00:43:14.617115 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-29 00:43:14.617119 | orchestrator | Sunday 29 March 2026 00:43:11 +0000 (0:00:00.105) 0:00:22.130 ********** 2026-03-29 00:43:14.617123 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:43:14.617126 | orchestrator | 2026-03-29 00:43:14.617130 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-29 00:43:14.617134 | orchestrator | Sunday 29 March 2026 00:43:11 +0000 (0:00:00.109) 0:00:22.239 ********** 2026-03-29 00:43:14.617147 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:14.617151 | orchestrator | 2026-03-29 00:43:14.617155 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-29 00:43:14.617159 | orchestrator | Sunday 29 March 2026 00:43:11 +0000 (0:00:00.103) 0:00:22.342 ********** 2026-03-29 00:43:14.617162 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:14.617166 | orchestrator | 2026-03-29 00:43:14.617170 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-29 00:43:14.617173 | orchestrator | Sunday 29 March 2026 00:43:11 +0000 (0:00:00.229) 0:00:22.571 ********** 2026-03-29 00:43:14.617177 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:14.617184 | orchestrator | 2026-03-29 00:43:14.617188 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-29 00:43:14.617192 | orchestrator | Sunday 29 March 2026 00:43:11 +0000 
(0:00:00.108) 0:00:22.680 **********
2026-03-29 00:43:14.617196 | orchestrator | ok: [testbed-node-4] => {
2026-03-29 00:43:14.617200 | orchestrator |     "ceph_osd_devices": {
2026-03-29 00:43:14.617204 | orchestrator |         "sdb": {
2026-03-29 00:43:14.617208 | orchestrator |             "osd_lvm_uuid": "ce40293b-1bc0-5558-a1b7-16c9a624d7c9"
2026-03-29 00:43:14.617212 | orchestrator |         },
2026-03-29 00:43:14.617216 | orchestrator |         "sdc": {
2026-03-29 00:43:14.617220 | orchestrator |             "osd_lvm_uuid": "c9903f66-e17d-5d19-b140-42471f0a3aa8"
2026-03-29 00:43:14.617223 | orchestrator |         }
2026-03-29 00:43:14.617227 | orchestrator |     }
2026-03-29 00:43:14.617231 | orchestrator | }
2026-03-29 00:43:14.617235 | orchestrator |
2026-03-29 00:43:14.617239 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-29 00:43:14.617243 | orchestrator | Sunday 29 March 2026 00:43:11 +0000 (0:00:00.123) 0:00:22.803 **********
2026-03-29 00:43:14.617247 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:43:14.617250 | orchestrator |
2026-03-29 00:43:14.617254 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-29 00:43:14.617258 | orchestrator | Sunday 29 March 2026 00:43:11 +0000 (0:00:00.126) 0:00:22.930 **********
2026-03-29 00:43:14.617261 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:43:14.617265 | orchestrator |
2026-03-29 00:43:14.617269 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-29 00:43:14.617273 | orchestrator | Sunday 29 March 2026 00:43:12 +0000 (0:00:00.122) 0:00:23.052 **********
2026-03-29 00:43:14.617276 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:43:14.617280 | orchestrator |
2026-03-29 00:43:14.617284 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-29 00:43:14.617288 | orchestrator | Sunday 29 March 2026 00:43:12 +0000 (0:00:00.119) 0:00:23.172 **********
2026-03-29 00:43:14.617292 | orchestrator | changed: [testbed-node-4] => {
2026-03-29 00:43:14.617295 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-29 00:43:14.617299 | orchestrator |         "ceph_osd_devices": {
2026-03-29 00:43:14.617303 | orchestrator |             "sdb": {
2026-03-29 00:43:14.617307 | orchestrator |                 "osd_lvm_uuid": "ce40293b-1bc0-5558-a1b7-16c9a624d7c9"
2026-03-29 00:43:14.617311 | orchestrator |             },
2026-03-29 00:43:14.617315 | orchestrator |             "sdc": {
2026-03-29 00:43:14.617318 | orchestrator |                 "osd_lvm_uuid": "c9903f66-e17d-5d19-b140-42471f0a3aa8"
2026-03-29 00:43:14.617322 | orchestrator |             }
2026-03-29 00:43:14.617326 | orchestrator |         },
2026-03-29 00:43:14.617330 | orchestrator |         "lvm_volumes": [
2026-03-29 00:43:14.617334 | orchestrator |             {
2026-03-29 00:43:14.617337 | orchestrator |                 "data": "osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9",
2026-03-29 00:43:14.617341 | orchestrator |                 "data_vg": "ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9"
2026-03-29 00:43:14.617345 | orchestrator |             },
2026-03-29 00:43:14.617349 | orchestrator |             {
2026-03-29 00:43:14.617352 | orchestrator |                 "data": "osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8",
2026-03-29 00:43:14.617356 | orchestrator |                 "data_vg": "ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8"
2026-03-29 00:43:14.617360 | orchestrator |             }
2026-03-29 00:43:14.617363 | orchestrator |         ]
2026-03-29 00:43:14.617367 | orchestrator |     }
2026-03-29 00:43:14.617371 | orchestrator | }
2026-03-29 00:43:14.617375 | orchestrator |
2026-03-29 00:43:14.617379 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-29 00:43:14.617382 | orchestrator | Sunday 29 March 2026 00:43:12 +0000 (0:00:00.185) 0:00:23.357 **********
2026-03-29 00:43:14.617386 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-29 00:43:14.617390 | orchestrator |
2026-03-29 00:43:14.617397 | orchestrator | PLAY [Ceph
configure LVM] ****************************************************** 2026-03-29 00:43:14.617400 | orchestrator | 2026-03-29 00:43:14.617404 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-29 00:43:14.617408 | orchestrator | Sunday 29 March 2026 00:43:13 +0000 (0:00:00.909) 0:00:24.267 ********** 2026-03-29 00:43:14.617412 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-29 00:43:14.617416 | orchestrator | 2026-03-29 00:43:14.617419 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-29 00:43:14.617423 | orchestrator | Sunday 29 March 2026 00:43:13 +0000 (0:00:00.506) 0:00:24.773 ********** 2026-03-29 00:43:14.617447 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:43:14.617451 | orchestrator | 2026-03-29 00:43:14.617454 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:14.617458 | orchestrator | Sunday 29 March 2026 00:43:14 +0000 (0:00:00.590) 0:00:25.363 ********** 2026-03-29 00:43:14.617462 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-29 00:43:14.617466 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-29 00:43:14.617469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-29 00:43:14.617473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-29 00:43:14.617477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-29 00:43:14.617484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-29 00:43:21.738911 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-29 00:43:21.739008 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-29 00:43:21.739021 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-29 00:43:21.739030 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-29 00:43:21.739057 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-29 00:43:21.739066 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-29 00:43:21.739075 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-29 00:43:21.739084 | orchestrator | 2026-03-29 00:43:21.739095 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:21.739104 | orchestrator | Sunday 29 March 2026 00:43:14 +0000 (0:00:00.265) 0:00:25.629 ********** 2026-03-29 00:43:21.739113 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.739123 | orchestrator | 2026-03-29 00:43:21.739132 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:21.739141 | orchestrator | Sunday 29 March 2026 00:43:14 +0000 (0:00:00.164) 0:00:25.794 ********** 2026-03-29 00:43:21.739150 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.739158 | orchestrator | 2026-03-29 00:43:21.739167 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:21.739175 | orchestrator | Sunday 29 March 2026 00:43:15 +0000 (0:00:00.182) 0:00:25.976 ********** 2026-03-29 00:43:21.739184 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.739192 | orchestrator | 2026-03-29 00:43:21.739201 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:21.739209 | 
orchestrator | Sunday 29 March 2026 00:43:15 +0000 (0:00:00.174) 0:00:26.151 ********** 2026-03-29 00:43:21.739222 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.739231 | orchestrator | 2026-03-29 00:43:21.739240 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:21.739248 | orchestrator | Sunday 29 March 2026 00:43:15 +0000 (0:00:00.139) 0:00:26.290 ********** 2026-03-29 00:43:21.739278 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.739287 | orchestrator | 2026-03-29 00:43:21.739296 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:21.739304 | orchestrator | Sunday 29 March 2026 00:43:15 +0000 (0:00:00.136) 0:00:26.427 ********** 2026-03-29 00:43:21.739312 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.739321 | orchestrator | 2026-03-29 00:43:21.739329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:21.739338 | orchestrator | Sunday 29 March 2026 00:43:15 +0000 (0:00:00.141) 0:00:26.569 ********** 2026-03-29 00:43:21.739346 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.739355 | orchestrator | 2026-03-29 00:43:21.739364 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:21.739373 | orchestrator | Sunday 29 March 2026 00:43:15 +0000 (0:00:00.131) 0:00:26.700 ********** 2026-03-29 00:43:21.739381 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.739390 | orchestrator | 2026-03-29 00:43:21.739398 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:21.739407 | orchestrator | Sunday 29 March 2026 00:43:15 +0000 (0:00:00.152) 0:00:26.853 ********** 2026-03-29 00:43:21.739415 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987) 2026-03-29 00:43:21.739425 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987) 2026-03-29 00:43:21.739433 | orchestrator | 2026-03-29 00:43:21.739548 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:21.739559 | orchestrator | Sunday 29 March 2026 00:43:16 +0000 (0:00:00.506) 0:00:27.359 ********** 2026-03-29 00:43:21.739569 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_19b179cd-386f-4584-8a4b-106e5ad8592d) 2026-03-29 00:43:21.739579 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_19b179cd-386f-4584-8a4b-106e5ad8592d) 2026-03-29 00:43:21.739590 | orchestrator | 2026-03-29 00:43:21.739599 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:21.739610 | orchestrator | Sunday 29 March 2026 00:43:17 +0000 (0:00:00.617) 0:00:27.976 ********** 2026-03-29 00:43:21.739620 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_64dd44e8-56db-4990-9653-26f9a904c769) 2026-03-29 00:43:21.739631 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_64dd44e8-56db-4990-9653-26f9a904c769) 2026-03-29 00:43:21.739641 | orchestrator | 2026-03-29 00:43:21.739650 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:43:21.739658 | orchestrator | Sunday 29 March 2026 00:43:17 +0000 (0:00:00.386) 0:00:28.363 ********** 2026-03-29 00:43:21.739667 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_66d732d5-e9a7-47c2-8d7a-ba89d690a00e) 2026-03-29 00:43:21.739675 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_66d732d5-e9a7-47c2-8d7a-ba89d690a00e) 2026-03-29 00:43:21.739684 | orchestrator | 2026-03-29 00:43:21.739692 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-29 00:43:21.739701 | orchestrator | Sunday 29 March 2026 00:43:17 +0000 (0:00:00.393) 0:00:28.756 ********** 2026-03-29 00:43:21.739710 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-29 00:43:21.739718 | orchestrator | 2026-03-29 00:43:21.739727 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:21.739751 | orchestrator | Sunday 29 March 2026 00:43:18 +0000 (0:00:00.315) 0:00:29.072 ********** 2026-03-29 00:43:21.739761 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-29 00:43:21.739769 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-29 00:43:21.739779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-29 00:43:21.739787 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-29 00:43:21.739804 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-29 00:43:21.739812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-29 00:43:21.739821 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-29 00:43:21.739830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-29 00:43:21.739838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-29 00:43:21.739847 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-29 00:43:21.739855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-03-29 00:43:21.739864 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-29 00:43:21.739872 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-29 00:43:21.739881 | orchestrator | 2026-03-29 00:43:21.739890 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:21.739898 | orchestrator | Sunday 29 March 2026 00:43:18 +0000 (0:00:00.330) 0:00:29.403 ********** 2026-03-29 00:43:21.739923 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.739942 | orchestrator | 2026-03-29 00:43:21.739951 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:21.739959 | orchestrator | Sunday 29 March 2026 00:43:18 +0000 (0:00:00.189) 0:00:29.593 ********** 2026-03-29 00:43:21.739968 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.739977 | orchestrator | 2026-03-29 00:43:21.739986 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:21.739994 | orchestrator | Sunday 29 March 2026 00:43:18 +0000 (0:00:00.204) 0:00:29.798 ********** 2026-03-29 00:43:21.740003 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.740012 | orchestrator | 2026-03-29 00:43:21.740020 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:21.740035 | orchestrator | Sunday 29 March 2026 00:43:19 +0000 (0:00:00.174) 0:00:29.972 ********** 2026-03-29 00:43:21.740044 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.740053 | orchestrator | 2026-03-29 00:43:21.740062 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:21.740070 | orchestrator | Sunday 29 March 2026 00:43:19 +0000 (0:00:00.214) 0:00:30.186 ********** 2026-03-29 00:43:21.740079 
| orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.740088 | orchestrator | 2026-03-29 00:43:21.740096 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:21.740105 | orchestrator | Sunday 29 March 2026 00:43:19 +0000 (0:00:00.187) 0:00:30.373 ********** 2026-03-29 00:43:21.740114 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.740122 | orchestrator | 2026-03-29 00:43:21.740131 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:21.740140 | orchestrator | Sunday 29 March 2026 00:43:20 +0000 (0:00:00.586) 0:00:30.960 ********** 2026-03-29 00:43:21.740149 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.740157 | orchestrator | 2026-03-29 00:43:21.740166 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:21.740175 | orchestrator | Sunday 29 March 2026 00:43:20 +0000 (0:00:00.192) 0:00:31.152 ********** 2026-03-29 00:43:21.740183 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.740192 | orchestrator | 2026-03-29 00:43:21.740201 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:21.740209 | orchestrator | Sunday 29 March 2026 00:43:20 +0000 (0:00:00.180) 0:00:31.332 ********** 2026-03-29 00:43:21.740218 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-29 00:43:21.740233 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-29 00:43:21.740242 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-29 00:43:21.740251 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-29 00:43:21.740260 | orchestrator | 2026-03-29 00:43:21.740268 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:21.740277 | orchestrator | Sunday 29 March 2026 00:43:20 +0000 (0:00:00.607) 0:00:31.940 
********** 2026-03-29 00:43:21.740286 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.740294 | orchestrator | 2026-03-29 00:43:21.740303 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:21.740312 | orchestrator | Sunday 29 March 2026 00:43:21 +0000 (0:00:00.187) 0:00:32.128 ********** 2026-03-29 00:43:21.740320 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.740329 | orchestrator | 2026-03-29 00:43:21.740338 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:21.740346 | orchestrator | Sunday 29 March 2026 00:43:21 +0000 (0:00:00.178) 0:00:32.307 ********** 2026-03-29 00:43:21.740355 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.740364 | orchestrator | 2026-03-29 00:43:21.740372 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:43:21.740381 | orchestrator | Sunday 29 March 2026 00:43:21 +0000 (0:00:00.187) 0:00:32.494 ********** 2026-03-29 00:43:21.740390 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:21.740398 | orchestrator | 2026-03-29 00:43:21.740413 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-29 00:43:25.536177 | orchestrator | Sunday 29 March 2026 00:43:21 +0000 (0:00:00.192) 0:00:32.686 ********** 2026-03-29 00:43:25.536251 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-03-29 00:43:25.536258 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-03-29 00:43:25.536263 | orchestrator | 2026-03-29 00:43:25.536268 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-29 00:43:25.536273 | orchestrator | Sunday 29 March 2026 00:43:21 +0000 (0:00:00.153) 0:00:32.840 ********** 2026-03-29 00:43:25.536278 | orchestrator | skipping: 
[testbed-node-5] 2026-03-29 00:43:25.536282 | orchestrator | 2026-03-29 00:43:25.536287 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-29 00:43:25.536291 | orchestrator | Sunday 29 March 2026 00:43:21 +0000 (0:00:00.108) 0:00:32.949 ********** 2026-03-29 00:43:25.536295 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:25.536299 | orchestrator | 2026-03-29 00:43:25.536303 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-29 00:43:25.536307 | orchestrator | Sunday 29 March 2026 00:43:22 +0000 (0:00:00.114) 0:00:33.063 ********** 2026-03-29 00:43:25.536311 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:25.536316 | orchestrator | 2026-03-29 00:43:25.536320 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-29 00:43:25.536324 | orchestrator | Sunday 29 March 2026 00:43:22 +0000 (0:00:00.115) 0:00:33.178 ********** 2026-03-29 00:43:25.536329 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:43:25.536334 | orchestrator | 2026-03-29 00:43:25.536338 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-29 00:43:25.536342 | orchestrator | Sunday 29 March 2026 00:43:22 +0000 (0:00:00.243) 0:00:33.422 ********** 2026-03-29 00:43:25.536346 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '185c2dd0-6b1c-571f-b734-244d928106eb'}}) 2026-03-29 00:43:25.536351 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '18721a71-2d87-5ab0-bec8-5e03a015e695'}}) 2026-03-29 00:43:25.536356 | orchestrator | 2026-03-29 00:43:25.536360 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-29 00:43:25.536364 | orchestrator | Sunday 29 March 2026 00:43:22 +0000 (0:00:00.157) 0:00:33.580 ********** 2026-03-29 00:43:25.536369 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '185c2dd0-6b1c-571f-b734-244d928106eb'}})  2026-03-29 00:43:25.536391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '18721a71-2d87-5ab0-bec8-5e03a015e695'}})  2026-03-29 00:43:25.536395 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:25.536399 | orchestrator | 2026-03-29 00:43:25.536404 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-29 00:43:25.536408 | orchestrator | Sunday 29 March 2026 00:43:22 +0000 (0:00:00.131) 0:00:33.712 ********** 2026-03-29 00:43:25.536412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '185c2dd0-6b1c-571f-b734-244d928106eb'}})  2026-03-29 00:43:25.536416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '18721a71-2d87-5ab0-bec8-5e03a015e695'}})  2026-03-29 00:43:25.536420 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:25.536424 | orchestrator | 2026-03-29 00:43:25.536428 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-29 00:43:25.536432 | orchestrator | Sunday 29 March 2026 00:43:22 +0000 (0:00:00.133) 0:00:33.845 ********** 2026-03-29 00:43:25.536436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '185c2dd0-6b1c-571f-b734-244d928106eb'}})  2026-03-29 00:43:25.536440 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '18721a71-2d87-5ab0-bec8-5e03a015e695'}})  2026-03-29 00:43:25.536503 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:25.536508 | orchestrator | 2026-03-29 00:43:25.536512 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-29 00:43:25.536516 | orchestrator | Sunday 29 March 2026 00:43:23 +0000 
(0:00:00.127) 0:00:33.973 ********** 2026-03-29 00:43:25.536520 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:43:25.536524 | orchestrator | 2026-03-29 00:43:25.536528 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-29 00:43:25.536532 | orchestrator | Sunday 29 March 2026 00:43:23 +0000 (0:00:00.109) 0:00:34.083 ********** 2026-03-29 00:43:25.536536 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:43:25.536540 | orchestrator | 2026-03-29 00:43:25.536544 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-29 00:43:25.536548 | orchestrator | Sunday 29 March 2026 00:43:23 +0000 (0:00:00.119) 0:00:34.202 ********** 2026-03-29 00:43:25.536552 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:25.536557 | orchestrator | 2026-03-29 00:43:25.536561 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-29 00:43:25.536565 | orchestrator | Sunday 29 March 2026 00:43:23 +0000 (0:00:00.113) 0:00:34.316 ********** 2026-03-29 00:43:25.536569 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:25.536573 | orchestrator | 2026-03-29 00:43:25.536577 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-29 00:43:25.536588 | orchestrator | Sunday 29 March 2026 00:43:23 +0000 (0:00:00.202) 0:00:34.518 ********** 2026-03-29 00:43:25.536592 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:25.536596 | orchestrator | 2026-03-29 00:43:25.536606 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-29 00:43:25.536611 | orchestrator | Sunday 29 March 2026 00:43:23 +0000 (0:00:00.146) 0:00:34.665 ********** 2026-03-29 00:43:25.536615 | orchestrator | ok: [testbed-node-5] => { 2026-03-29 00:43:25.536619 | orchestrator |  "ceph_osd_devices": { 2026-03-29 00:43:25.536623 | orchestrator |  "sdb": 
{
2026-03-29 00:43:25.536640 | orchestrator |             "osd_lvm_uuid": "185c2dd0-6b1c-571f-b734-244d928106eb"
2026-03-29 00:43:25.536645 | orchestrator |         },
2026-03-29 00:43:25.536649 | orchestrator |         "sdc": {
2026-03-29 00:43:25.536665 | orchestrator |             "osd_lvm_uuid": "18721a71-2d87-5ab0-bec8-5e03a015e695"
2026-03-29 00:43:25.536670 | orchestrator |         }
2026-03-29 00:43:25.536674 | orchestrator |     }
2026-03-29 00:43:25.536678 | orchestrator | }
2026-03-29 00:43:25.536683 | orchestrator |
2026-03-29 00:43:25.536692 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-29 00:43:25.536696 | orchestrator | Sunday 29 March 2026 00:43:23 +0000 (0:00:00.133) 0:00:34.798 **********
2026-03-29 00:43:25.536700 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:43:25.536704 | orchestrator |
2026-03-29 00:43:25.536708 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-29 00:43:25.536713 | orchestrator | Sunday 29 March 2026 00:43:23 +0000 (0:00:00.124) 0:00:34.923 **********
2026-03-29 00:43:25.536717 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:43:25.536721 | orchestrator |
2026-03-29 00:43:25.536725 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-29 00:43:25.536729 | orchestrator | Sunday 29 March 2026 00:43:24 +0000 (0:00:00.311) 0:00:35.234 **********
2026-03-29 00:43:25.536733 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:43:25.536737 | orchestrator |
2026-03-29 00:43:25.536741 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-29 00:43:25.536746 | orchestrator | Sunday 29 March 2026 00:43:24 +0000 (0:00:00.120) 0:00:35.355 **********
2026-03-29 00:43:25.536750 | orchestrator | changed: [testbed-node-5] => {
2026-03-29 00:43:25.536755 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-29 00:43:25.536760 | orchestrator |         "ceph_osd_devices": {
2026-03-29 00:43:25.536765 | orchestrator |             "sdb": {
2026-03-29 00:43:25.536769 | orchestrator |                 "osd_lvm_uuid": "185c2dd0-6b1c-571f-b734-244d928106eb"
2026-03-29 00:43:25.536774 | orchestrator |             },
2026-03-29 00:43:25.536779 | orchestrator |             "sdc": {
2026-03-29 00:43:25.536787 | orchestrator |                 "osd_lvm_uuid": "18721a71-2d87-5ab0-bec8-5e03a015e695"
2026-03-29 00:43:25.536792 | orchestrator |             }
2026-03-29 00:43:25.536796 | orchestrator |         },
2026-03-29 00:43:25.536801 | orchestrator |         "lvm_volumes": [
2026-03-29 00:43:25.536806 | orchestrator |             {
2026-03-29 00:43:25.536811 | orchestrator |                 "data": "osd-block-185c2dd0-6b1c-571f-b734-244d928106eb",
2026-03-29 00:43:25.536816 | orchestrator |                 "data_vg": "ceph-185c2dd0-6b1c-571f-b734-244d928106eb"
2026-03-29 00:43:25.536821 | orchestrator |             },
2026-03-29 00:43:25.536829 | orchestrator |             {
2026-03-29 00:43:25.536834 | orchestrator |                 "data": "osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695",
2026-03-29 00:43:25.536839 | orchestrator |                 "data_vg": "ceph-18721a71-2d87-5ab0-bec8-5e03a015e695"
2026-03-29 00:43:25.536844 | orchestrator |             }
2026-03-29 00:43:25.536849 | orchestrator |         ]
2026-03-29 00:43:25.536853 | orchestrator |     }
2026-03-29 00:43:25.536858 | orchestrator | }
2026-03-29 00:43:25.536863 | orchestrator |
2026-03-29 00:43:25.536868 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-29 00:43:25.536873 | orchestrator | Sunday 29 March 2026 00:43:24 +0000 (0:00:00.175) 0:00:35.530 **********
2026-03-29 00:43:25.536878 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-29 00:43:25.536882 | orchestrator |
2026-03-29 00:43:25.536887 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:43:25.536892 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-29 00:43:25.536899 |
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-29 00:43:25.536903 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-29 00:43:25.536908 | orchestrator | 2026-03-29 00:43:25.536913 | orchestrator | 2026-03-29 00:43:25.536918 | orchestrator | 2026-03-29 00:43:25.536923 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:43:25.536927 | orchestrator | Sunday 29 March 2026 00:43:25 +0000 (0:00:00.950) 0:00:36.481 ********** 2026-03-29 00:43:25.536936 | orchestrator | =============================================================================== 2026-03-29 00:43:25.536940 | orchestrator | Write configuration file ------------------------------------------------ 3.75s 2026-03-29 00:43:25.536945 | orchestrator | Add known partitions to the list of available block devices ------------- 1.03s 2026-03-29 00:43:25.536950 | orchestrator | Get initial list of available block devices ----------------------------- 1.02s 2026-03-29 00:43:25.536955 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.97s 2026-03-29 00:43:25.536960 | orchestrator | Add known links to the list of available block devices ------------------ 0.95s 2026-03-29 00:43:25.536964 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s 2026-03-29 00:43:25.536969 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s 2026-03-29 00:43:25.536974 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s 2026-03-29 00:43:25.536979 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s 2026-03-29 00:43:25.536983 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s 2026-03-29 
00:43:25.536988 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.58s 2026-03-29 00:43:25.536993 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s 2026-03-29 00:43:25.536998 | orchestrator | Print DB devices -------------------------------------------------------- 0.56s 2026-03-29 00:43:25.537005 | orchestrator | Print configuration data ------------------------------------------------ 0.55s 2026-03-29 00:43:25.960815 | orchestrator | Set WAL devices config data --------------------------------------------- 0.55s 2026-03-29 00:43:25.960905 | orchestrator | Add known links to the list of available block devices ------------------ 0.53s 2026-03-29 00:43:25.960917 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s 2026-03-29 00:43:25.960926 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.52s 2026-03-29 00:43:25.960936 | orchestrator | Add known links to the list of available block devices ------------------ 0.51s 2026-03-29 00:43:25.960944 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.49s 2026-03-29 00:43:47.568983 | orchestrator | 2026-03-29 00:43:47 | INFO  | Task 57479ebe-015d-4b4f-a16c-555670e811f0 (sync inventory) is running in background. Output coming soon. 
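The configuration dump above pairs each `ceph_osd_devices` entry with a generated `lvm_volumes` entry: the `osd_lvm_uuid` reappears as both the VG name (`ceph-<uuid>`) and the LV name (`osd-block-<uuid>`). A minimal sketch of that mapping, with naming taken from the log output rather than from the OSISM playbook source (the helper name is ours):

```python
# Sketch: derive the lvm_volumes list shown in the log from ceph_osd_devices.
# The ceph-<uuid> / osd-block-<uuid> naming is read off the log above; this
# is an illustration of the mapping, not the actual OSISM implementation.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "185c2dd0-6b1c-571f-b734-244d928106eb"},
    "sdc": {"osd_lvm_uuid": "18721a71-2d87-5ab0-bec8-5e03a015e695"},
}

def lvm_volumes(devices: dict) -> list:
    return [
        {
            "data": f"osd-block-{conf['osd_lvm_uuid']}",   # LV name
            "data_vg": f"ceph-{conf['osd_lvm_uuid']}",     # VG name
        }
        for conf in devices.values()
    ]

volumes = lvm_volumes(ceph_osd_devices)
for vol in volumes:
    print(vol["data_vg"], "->", vol["data"])
```

Because only `data` and `data_vg` appear per volume, these OSDs are plain block-only OSDs with no separate DB or WAL device, which matches the skipped DB/WAL tasks later in the log.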
2026-03-29 00:44:15.323069 | orchestrator | 2026-03-29 00:43:49 | INFO  | Starting group_vars file reorganization
2026-03-29 00:44:15.323152 | orchestrator | 2026-03-29 00:43:49 | INFO  | Moved 0 file(s) to their respective directories
2026-03-29 00:44:15.323160 | orchestrator | 2026-03-29 00:43:49 | INFO  | Group_vars file reorganization completed
2026-03-29 00:44:15.323165 | orchestrator | 2026-03-29 00:43:51 | INFO  | Starting variable preparation from inventory
2026-03-29 00:44:15.323170 | orchestrator | 2026-03-29 00:43:54 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-29 00:44:15.323174 | orchestrator | 2026-03-29 00:43:54 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-29 00:44:15.323179 | orchestrator | 2026-03-29 00:43:54 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-29 00:44:15.323183 | orchestrator | 2026-03-29 00:43:54 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-29 00:44:15.323187 | orchestrator | 2026-03-29 00:43:54 | INFO  | Variable preparation completed
2026-03-29 00:44:15.323191 | orchestrator | 2026-03-29 00:43:55 | INFO  | Starting inventory overwrite handling
2026-03-29 00:44:15.323195 | orchestrator | 2026-03-29 00:43:55 | INFO  | Handling group overwrites in 99-overwrite
2026-03-29 00:44:15.323199 | orchestrator | 2026-03-29 00:43:55 | INFO  | Removing group frr:children from 60-generic
2026-03-29 00:44:15.323220 | orchestrator | 2026-03-29 00:43:55 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-29 00:44:15.323224 | orchestrator | 2026-03-29 00:43:55 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-29 00:44:15.323229 | orchestrator | 2026-03-29 00:43:55 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-29 00:44:15.323232 | orchestrator | 2026-03-29 00:43:55 | INFO  | Handling group overwrites in 20-roles
2026-03-29 00:44:15.323236 | orchestrator | 2026-03-29 00:43:55 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-29 00:44:15.323240 | orchestrator | 2026-03-29 00:43:55 | INFO  | Removed 5 group(s) in total
2026-03-29 00:44:15.323244 | orchestrator | 2026-03-29 00:43:55 | INFO  | Inventory overwrite handling completed
2026-03-29 00:44:15.323247 | orchestrator | 2026-03-29 00:43:56 | INFO  | Starting merge of inventory files
2026-03-29 00:44:15.323251 | orchestrator | 2026-03-29 00:43:56 | INFO  | Inventory files merged successfully
2026-03-29 00:44:15.323255 | orchestrator | 2026-03-29 00:44:01 | INFO  | Generating minified hosts file
2026-03-29 00:44:15.323259 | orchestrator | 2026-03-29 00:44:02 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-03-29 00:44:15.323263 | orchestrator | 2026-03-29 00:44:02 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-03-29 00:44:15.323279 | orchestrator | 2026-03-29 00:44:03 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-29 00:44:15.323283 | orchestrator | 2026-03-29 00:44:14 | INFO  | Successfully wrote ClusterShell configuration
2026-03-29 00:44:15.323287 | orchestrator | [master 40ab655] 2026-03-29-00-44
2026-03-29 00:44:15.323293 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-03-29 00:44:15.323300 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-03-29 00:44:15.323306 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-03-29 00:44:15.323312 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-03-29 00:44:16.587750 | orchestrator | 2026-03-29 00:44:16 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-03-29 00:44:16.643492 | orchestrator | 2026-03-29 00:44:16 | INFO  | Task 2932cf92-72b3-47b3-909f-c6301a60e1ea (ceph-create-lvm-devices) was prepared for execution.
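A side note on the `osd_lvm_uuid` values written by the "Set UUIDs for OSD VGs/LVs" task: every UUID in this log is a version-5 (name-based) UUID, which suggests they are derived deterministically rather than generated at random, so re-running the play yields stable VG/LV names per host and device. The actual namespace and name inputs are not visible in the log; everything in the sketch below is an assumption made for illustration:

```python
import uuid

# Assumption: a name-based (version 5) UUID derived from host and device name.
# The real namespace/name inputs used by the playbooks are not shown in the
# log; NAMESPACE_DNS and the "<host>-<device>" scheme are illustrative only.
def osd_lvm_uuid(hostname: str, device: str) -> uuid.UUID:
    return uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}")

u1 = osd_lvm_uuid("testbed-node-3", "sdb")
u2 = osd_lvm_uuid("testbed-node-3", "sdb")
print(u1, u1.version)  # same inputs always give the same version-5 UUID
```

Whatever the exact inputs, the observable property is the same: the UUIDs in `host_vars` stay identical across runs, which is what makes the later VG/LV creation idempotent.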
2026-03-29 00:44:16.643661 | orchestrator | 2026-03-29 00:44:16 | INFO  | It takes a moment until task 2932cf92-72b3-47b3-909f-c6301a60e1ea (ceph-create-lvm-devices) has been started and output is visible here.
2026-03-29 00:44:27.621295 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-29 00:44:27.621410 | orchestrator | 2.16.14
2026-03-29 00:44:27.621426 | orchestrator |
2026-03-29 00:44:27.621437 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-29 00:44:27.621447 | orchestrator |
2026-03-29 00:44:27.621456 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-29 00:44:27.621465 | orchestrator | Sunday 29 March 2026 00:44:20 +0000 (0:00:00.251) 0:00:00.251 **********
2026-03-29 00:44:27.621474 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-29 00:44:27.621483 | orchestrator |
2026-03-29 00:44:27.621492 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-29 00:44:27.621500 | orchestrator | Sunday 29 March 2026 00:44:20 +0000 (0:00:00.269) 0:00:00.520 **********
2026-03-29 00:44:27.621509 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:44:27.621518 | orchestrator |
2026-03-29 00:44:27.621526 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:27.621535 | orchestrator | Sunday 29 March 2026 00:44:21 +0000 (0:00:00.178) 0:00:00.699 **********
2026-03-29 00:44:27.621658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-29 00:44:27.621670 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-29 00:44:27.621678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-29 00:44:27.621686 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-29 00:44:27.621694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-29 00:44:27.621713 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-29 00:44:27.621721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-29 00:44:27.621729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-29 00:44:27.621737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-29 00:44:27.621744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-29 00:44:27.621752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-29 00:44:27.621760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-29 00:44:27.621767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-29 00:44:27.621775 | orchestrator |
2026-03-29 00:44:27.621783 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:27.621791 | orchestrator | Sunday 29 March 2026 00:44:21 +0000 (0:00:00.355) 0:00:01.054 **********
2026-03-29 00:44:27.621798 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:27.621806 | orchestrator |
2026-03-29 00:44:27.621814 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:27.621822 | orchestrator | Sunday 29 March 2026 00:44:21 +0000 (0:00:00.412) 0:00:01.466 **********
2026-03-29 00:44:27.621831 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:27.621840 | orchestrator |
2026-03-29 00:44:27.621849 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:27.621858 | orchestrator | Sunday 29 March 2026 00:44:22 +0000 (0:00:00.179) 0:00:01.646 **********
2026-03-29 00:44:27.621867 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:27.621876 | orchestrator |
2026-03-29 00:44:27.621886 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:27.621895 | orchestrator | Sunday 29 March 2026 00:44:22 +0000 (0:00:00.200) 0:00:01.847 **********
2026-03-29 00:44:27.621904 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:27.621913 | orchestrator |
2026-03-29 00:44:27.621922 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:27.621931 | orchestrator | Sunday 29 March 2026 00:44:22 +0000 (0:00:00.191) 0:00:02.039 **********
2026-03-29 00:44:27.621944 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:27.621957 | orchestrator |
2026-03-29 00:44:27.621970 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:27.621984 | orchestrator | Sunday 29 March 2026 00:44:22 +0000 (0:00:00.197) 0:00:02.236 **********
2026-03-29 00:44:27.621993 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:27.622001 | orchestrator |
2026-03-29 00:44:27.622009 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:27.622071 | orchestrator | Sunday 29 March 2026 00:44:22 +0000 (0:00:00.215) 0:00:02.452 **********
2026-03-29 00:44:27.622082 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:27.622090 | orchestrator |
2026-03-29 00:44:27.622097 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:27.622105 | orchestrator | Sunday 29 March 2026 00:44:23 +0000 (0:00:00.184) 0:00:02.636 **********
2026-03-29 00:44:27.622113 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:27.622129 | orchestrator |
2026-03-29 00:44:27.622137 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:27.622145 | orchestrator | Sunday 29 March 2026 00:44:23 +0000 (0:00:00.189) 0:00:02.826 **********
2026-03-29 00:44:27.622152 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253)
2026-03-29 00:44:27.622162 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253)
2026-03-29 00:44:27.622169 | orchestrator |
2026-03-29 00:44:27.622177 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:27.622202 | orchestrator | Sunday 29 March 2026 00:44:23 +0000 (0:00:00.445) 0:00:03.272 **********
2026-03-29 00:44:27.622210 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cf707a58-c66d-4c72-840a-e00f4b50b6ac)
2026-03-29 00:44:27.622218 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cf707a58-c66d-4c72-840a-e00f4b50b6ac)
2026-03-29 00:44:27.622226 | orchestrator |
2026-03-29 00:44:27.622234 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:27.622242 | orchestrator | Sunday 29 March 2026 00:44:24 +0000 (0:00:00.487) 0:00:03.759 **********
2026-03-29 00:44:27.622249 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_756b3521-cc64-4337-8d74-551033403337)
2026-03-29 00:44:27.622257 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_756b3521-cc64-4337-8d74-551033403337)
2026-03-29 00:44:27.622265 | orchestrator |
2026-03-29 00:44:27.622272 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:27.622280 | orchestrator | Sunday 29 March 2026 00:44:24 +0000 (0:00:00.511) 0:00:04.271 **********
2026-03-29 00:44:27.622288 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_006c3921-cee3-45d1-95d5-34c501bc63f9)
2026-03-29 00:44:27.622296 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_006c3921-cee3-45d1-95d5-34c501bc63f9)
2026-03-29 00:44:27.622303 | orchestrator |
2026-03-29 00:44:27.622311 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:27.622319 | orchestrator | Sunday 29 March 2026 00:44:25 +0000 (0:00:00.584) 0:00:04.855 **********
2026-03-29 00:44:27.622327 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-29 00:44:27.622334 | orchestrator |
2026-03-29 00:44:27.622342 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:27.622350 | orchestrator | Sunday 29 March 2026 00:44:25 +0000 (0:00:00.625) 0:00:05.480 **********
2026-03-29 00:44:27.622358 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-29 00:44:27.622366 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-29 00:44:27.622374 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-29 00:44:27.622382 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-29 00:44:27.622390 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-29 00:44:27.622398 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-29 00:44:27.622405 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-29 00:44:27.622413 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-29 00:44:27.622421 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-29 00:44:27.622428 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-29 00:44:27.622436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-29 00:44:27.622444 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-29 00:44:27.622467 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-29 00:44:27.622475 | orchestrator |
2026-03-29 00:44:27.622483 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:27.622490 | orchestrator | Sunday 29 March 2026 00:44:26 +0000 (0:00:00.378) 0:00:05.859 **********
2026-03-29 00:44:27.622498 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:27.622506 | orchestrator |
2026-03-29 00:44:27.622514 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:27.622521 | orchestrator | Sunday 29 March 2026 00:44:26 +0000 (0:00:00.183) 0:00:06.042 **********
2026-03-29 00:44:27.622529 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:27.622537 | orchestrator |
2026-03-29 00:44:27.622576 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:27.622584 | orchestrator | Sunday 29 March 2026 00:44:26 +0000 (0:00:00.175) 0:00:06.218 **********
2026-03-29 00:44:27.622592 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:27.622600 | orchestrator |
2026-03-29 00:44:27.622608 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:27.622616 | orchestrator | Sunday 29 March 2026 00:44:26 +0000 (0:00:00.187) 0:00:06.406 **********
2026-03-29 00:44:27.622623 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:27.622631 | orchestrator |
2026-03-29 00:44:27.622639 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:27.622647 | orchestrator | Sunday 29 March 2026 00:44:27 +0000 (0:00:00.185) 0:00:06.591 **********
2026-03-29 00:44:27.622655 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:27.622662 | orchestrator |
2026-03-29 00:44:27.622670 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:27.622678 | orchestrator | Sunday 29 March 2026 00:44:27 +0000 (0:00:00.184) 0:00:06.776 **********
2026-03-29 00:44:27.622686 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:27.622693 | orchestrator |
2026-03-29 00:44:27.622701 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:27.622709 | orchestrator | Sunday 29 March 2026 00:44:27 +0000 (0:00:00.191) 0:00:06.967 **********
2026-03-29 00:44:27.622717 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:27.622725 | orchestrator |
2026-03-29 00:44:27.622738 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:35.105877 | orchestrator | Sunday 29 March 2026 00:44:27 +0000 (0:00:00.198) 0:00:07.166 **********
2026-03-29 00:44:35.105953 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:35.105960 | orchestrator |
2026-03-29 00:44:35.105965 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:35.105969 | orchestrator | Sunday 29 March 2026 00:44:27 +0000 (0:00:00.216) 0:00:07.383 **********
2026-03-29 00:44:35.105974 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-29 00:44:35.105978 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-29 00:44:35.105983 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-29 00:44:35.105987 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-29 00:44:35.105990 | orchestrator |
2026-03-29 00:44:35.106054 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:35.106083 | orchestrator | Sunday 29 March 2026 00:44:28 +0000 (0:00:00.848) 0:00:08.231 **********
2026-03-29 00:44:35.106088 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:35.106092 | orchestrator |
2026-03-29 00:44:35.106096 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:35.106100 | orchestrator | Sunday 29 March 2026 00:44:28 +0000 (0:00:00.179) 0:00:08.411 **********
2026-03-29 00:44:35.106104 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:35.106107 | orchestrator |
2026-03-29 00:44:35.106111 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:35.106133 | orchestrator | Sunday 29 March 2026 00:44:29 +0000 (0:00:00.179) 0:00:08.591 **********
2026-03-29 00:44:35.106137 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:35.106141 | orchestrator |
2026-03-29 00:44:35.106145 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:35.106149 | orchestrator | Sunday 29 March 2026 00:44:29 +0000 (0:00:00.185) 0:00:08.776 **********
2026-03-29 00:44:35.106152 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:35.106156 | orchestrator |
2026-03-29 00:44:35.106170 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-29 00:44:35.106174 | orchestrator | Sunday 29 March 2026 00:44:29 +0000 (0:00:00.191) 0:00:08.968 **********
2026-03-29 00:44:35.106178 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:35.106181 | orchestrator |
2026-03-29 00:44:35.106185 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-29 00:44:35.106189 | orchestrator | Sunday 29 March 2026 00:44:29 +0000 (0:00:00.120) 0:00:09.088 **********
2026-03-29 00:44:35.106193 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cb4f0063-6caa-55a9-9ed6-73f648958ae5'}})
2026-03-29 00:44:35.106198 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9db53e8f-4e16-545c-9934-db4b909c3b32'}})
2026-03-29 00:44:35.106202 | orchestrator |
2026-03-29 00:44:35.106205 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-29 00:44:35.106209 | orchestrator | Sunday 29 March 2026 00:44:29 +0000 (0:00:00.167) 0:00:09.255 **********
2026-03-29 00:44:35.106214 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})
2026-03-29 00:44:35.106220 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})
2026-03-29 00:44:35.106223 | orchestrator |
2026-03-29 00:44:35.106228 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-29 00:44:35.106231 | orchestrator | Sunday 29 March 2026 00:44:31 +0000 (0:00:02.025) 0:00:11.281 **********
2026-03-29 00:44:35.106235 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})
2026-03-29 00:44:35.106240 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})
2026-03-29 00:44:35.106244 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:35.106248 | orchestrator |
2026-03-29 00:44:35.106252 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-29 00:44:35.106255 | orchestrator | Sunday 29 March 2026 00:44:31 +0000 (0:00:00.140) 0:00:11.421 **********
2026-03-29 00:44:35.106259 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})
2026-03-29 00:44:35.106263 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})
2026-03-29 00:44:35.106267 | orchestrator |
2026-03-29 00:44:35.106270 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-29 00:44:35.106274 | orchestrator | Sunday 29 March 2026 00:44:33 +0000 (0:00:01.452) 0:00:12.874 **********
2026-03-29 00:44:35.106278 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})
2026-03-29 00:44:35.106282 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})
2026-03-29 00:44:35.106286 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:35.106290 | orchestrator |
2026-03-29 00:44:35.106293 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-29 00:44:35.106301 | orchestrator | Sunday 29 March 2026 00:44:33 +0000 (0:00:00.141) 0:00:13.016 **********
2026-03-29 00:44:35.106316 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:35.106320 | orchestrator |
2026-03-29 00:44:35.106324 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-29 00:44:35.106328 | orchestrator | Sunday 29 March 2026 00:44:33 +0000 (0:00:00.119) 0:00:13.135 **********
2026-03-29 00:44:35.106332 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})
2026-03-29 00:44:35.106336 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})
2026-03-29 00:44:35.106340 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:35.106343 | orchestrator |
2026-03-29 00:44:35.106347 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-29 00:44:35.106351 | orchestrator | Sunday 29 March 2026 00:44:33 +0000 (0:00:00.274) 0:00:13.410 **********
2026-03-29 00:44:35.106355 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:35.106359 | orchestrator |
2026-03-29 00:44:35.106362 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-29 00:44:35.106366 | orchestrator | Sunday 29 March 2026 00:44:33 +0000 (0:00:00.124) 0:00:13.534 **********
2026-03-29 00:44:35.106370 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})
2026-03-29 00:44:35.106374 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})
2026-03-29 00:44:35.106378 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:35.106382 | orchestrator |
2026-03-29 00:44:35.106386 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-29 00:44:35.106391 | orchestrator | Sunday 29 March 2026 00:44:34 +0000 (0:00:00.154) 0:00:13.688 **********
2026-03-29 00:44:35.106395 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:35.106400 | orchestrator |
2026-03-29 00:44:35.106404 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-29 00:44:35.106409 | orchestrator | Sunday 29 March 2026 00:44:34 +0000 (0:00:00.121) 0:00:13.810 **********
2026-03-29 00:44:35.106413 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})
2026-03-29 00:44:35.106418 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})
2026-03-29 00:44:35.106422 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:35.106426 | orchestrator |
2026-03-29 00:44:35.106431 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-29 00:44:35.106435 | orchestrator | Sunday 29 March 2026 00:44:34 +0000 (0:00:00.135) 0:00:13.945 **********
2026-03-29 00:44:35.106439 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:44:35.106444 | orchestrator |
2026-03-29 00:44:35.106448 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-29 00:44:35.106453 | orchestrator | Sunday 29 March 2026 00:44:34 +0000 (0:00:00.147) 0:00:14.093 **********
2026-03-29 00:44:35.106457 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})
2026-03-29 00:44:35.106462 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})
2026-03-29 00:44:35.106466 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:35.106471 | orchestrator |
2026-03-29 00:44:35.106475 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-29 00:44:35.106483 | orchestrator | Sunday 29 March 2026 00:44:34 +0000 (0:00:00.157) 0:00:14.250 **********
2026-03-29 00:44:35.106488 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})
2026-03-29 00:44:35.106492 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})
2026-03-29 00:44:35.106497 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:35.106501 | orchestrator |
2026-03-29 00:44:35.106505 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-29 00:44:35.106510 | orchestrator | Sunday 29 March 2026 00:44:34 +0000 (0:00:00.135) 0:00:14.386 **********
2026-03-29 00:44:35.106514 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})
2026-03-29 00:44:35.106518 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})
2026-03-29 00:44:35.106523 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:35.106527 | orchestrator |
2026-03-29 00:44:35.106531 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-29 00:44:35.106597 | orchestrator | Sunday 29 March 2026 00:44:34 +0000 (0:00:00.139) 0:00:14.525 **********
2026-03-29 00:44:35.106609 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:35.106615 | orchestrator |
2026-03-29 00:44:35.106620 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-29 00:44:35.106631 | orchestrator | Sunday 29 March 2026 00:44:35 +0000 (0:00:00.120) 0:00:14.645 **********
2026-03-29 00:44:41.152240 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:41.152357 | orchestrator |
2026-03-29 00:44:41.152412 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-29 00:44:41.152433 | orchestrator | Sunday 29 March 2026 00:44:35 +0000 (0:00:00.125) 0:00:14.771 **********
2026-03-29 00:44:41.152448 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:41.152465 | orchestrator |
2026-03-29 00:44:41.152475 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-29 00:44:41.152484 | orchestrator | Sunday 29 March 2026 00:44:35 +0000 (0:00:00.123) 0:00:14.895 **********
2026-03-29 00:44:41.152493 | orchestrator | ok: [testbed-node-3] => {
2026-03-29 00:44:41.152504 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-29 00:44:41.152514 | orchestrator | }
2026-03-29 00:44:41.152523 | orchestrator |
2026-03-29 00:44:41.152532 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-29 00:44:41.152541 | orchestrator | Sunday 29 March 2026 00:44:35 +0000 (0:00:00.242) 0:00:15.137 **********
2026-03-29 00:44:41.152549 | orchestrator | ok: [testbed-node-3] => {
2026-03-29 00:44:41.152558 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-29 00:44:41.152607 | orchestrator | }
2026-03-29 00:44:41.152616 | orchestrator |
2026-03-29 00:44:41.152625 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-29 00:44:41.152634 | orchestrator | Sunday 29 March 2026 00:44:35 +0000 (0:00:00.135) 0:00:15.273 **********
2026-03-29 00:44:41.152643 | orchestrator | ok: [testbed-node-3] => {
2026-03-29 00:44:41.152652 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-29 00:44:41.152660 | orchestrator | }
2026-03-29 00:44:41.152669 | orchestrator |
2026-03-29 00:44:41.152684 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-29 00:44:41.152699 | orchestrator | Sunday 29 March 2026 00:44:35 +0000 (0:00:00.127) 0:00:15.400 **********
2026-03-29 00:44:41.152714 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:44:41.152730 | orchestrator |
2026-03-29 00:44:41.152764 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-29 00:44:41.152779 | orchestrator | Sunday 29 March 2026 00:44:36 +0000 (0:00:00.643) 0:00:16.044 **********
2026-03-29 00:44:41.152821 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:44:41.152837 | orchestrator |
2026-03-29 00:44:41.152852 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-29 00:44:41.152867 | orchestrator | Sunday 29 March 2026 00:44:37 +0000 (0:00:00.511) 0:00:16.556 **********
2026-03-29 00:44:41.152883 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:44:41.152897 | orchestrator |
2026-03-29 00:44:41.152912 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-29 00:44:41.152927 | orchestrator | Sunday 29 March 2026 00:44:37 +0000 (0:00:00.497) 0:00:17.054 **********
2026-03-29 00:44:41.152943 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:44:41.152956 | orchestrator |
2026-03-29 00:44:41.152971 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-29 00:44:41.152985 | orchestrator | Sunday 29 March 2026 00:44:37 +0000 (0:00:00.132) 0:00:17.186 **********
2026-03-29 00:44:41.152999 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:41.153013 | orchestrator |
2026-03-29 00:44:41.153027 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-29 00:44:41.153041 | orchestrator | Sunday 29 March 2026 00:44:37 +0000 (0:00:00.107) 0:00:17.294 **********
2026-03-29 00:44:41.153058 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:41.153072 | orchestrator |
2026-03-29 00:44:41.153088 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-29 00:44:41.153104 | orchestrator | Sunday 29 March 2026 00:44:37 +0000 (0:00:00.104) 0:00:17.398 **********
2026-03-29 00:44:41.153120 | orchestrator | ok: [testbed-node-3] => {
2026-03-29 00:44:41.153135 | orchestrator |  "vgs_report": {
2026-03-29 00:44:41.153146 | orchestrator |  "vg": []
2026-03-29 00:44:41.153155 | orchestrator |  }
2026-03-29 00:44:41.153164 | orchestrator | }
2026-03-29 00:44:41.153173 | orchestrator |
2026-03-29 00:44:41.153181 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-29 00:44:41.153190 | orchestrator | Sunday 29 March 2026 00:44:37 +0000 (0:00:00.136) 0:00:17.535 **********
2026-03-29 00:44:41.153198 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:41.153207 | orchestrator |
2026-03-29 00:44:41.153216 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-29 00:44:41.153225 | orchestrator | Sunday 29 March 2026 00:44:38 +0000 (0:00:00.144) 0:00:17.680 **********
2026-03-29 00:44:41.153233 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:41.153242 | orchestrator |
2026-03-29 00:44:41.153250 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-29 00:44:41.153259 | orchestrator | Sunday 29 March 2026 00:44:38 +0000 (0:00:00.121) 0:00:17.802 **********
2026-03-29 00:44:41.153268 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:41.153276 | orchestrator |
2026-03-29 00:44:41.153285 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-29 00:44:41.153293 | orchestrator | Sunday 29 March 2026 00:44:38 +0000 (0:00:00.348) 0:00:18.150 **********
2026-03-29 00:44:41.153302 | orchestrator | skipping: [testbed-node-3]
2026-03-29
00:44:41.153311 | orchestrator | 2026-03-29 00:44:41.153319 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-29 00:44:41.153328 | orchestrator | Sunday 29 March 2026 00:44:38 +0000 (0:00:00.123) 0:00:18.274 ********** 2026-03-29 00:44:41.153337 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:41.153345 | orchestrator | 2026-03-29 00:44:41.153356 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-29 00:44:41.153371 | orchestrator | Sunday 29 March 2026 00:44:38 +0000 (0:00:00.141) 0:00:18.416 ********** 2026-03-29 00:44:41.153392 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:41.153407 | orchestrator | 2026-03-29 00:44:41.153421 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-29 00:44:41.153434 | orchestrator | Sunday 29 March 2026 00:44:38 +0000 (0:00:00.131) 0:00:18.548 ********** 2026-03-29 00:44:41.153448 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:41.153475 | orchestrator | 2026-03-29 00:44:41.153489 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-29 00:44:41.153504 | orchestrator | Sunday 29 March 2026 00:44:39 +0000 (0:00:00.135) 0:00:18.684 ********** 2026-03-29 00:44:41.153543 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:41.153560 | orchestrator | 2026-03-29 00:44:41.153602 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-29 00:44:41.153629 | orchestrator | Sunday 29 March 2026 00:44:39 +0000 (0:00:00.137) 0:00:18.822 ********** 2026-03-29 00:44:41.153644 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:41.153659 | orchestrator | 2026-03-29 00:44:41.153672 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-29 00:44:41.153687 | orchestrator | Sunday 29 
March 2026 00:44:39 +0000 (0:00:00.136) 0:00:18.958 ********** 2026-03-29 00:44:41.153702 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:41.153717 | orchestrator | 2026-03-29 00:44:41.153731 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-29 00:44:41.153746 | orchestrator | Sunday 29 March 2026 00:44:39 +0000 (0:00:00.137) 0:00:19.095 ********** 2026-03-29 00:44:41.153760 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:41.153773 | orchestrator | 2026-03-29 00:44:41.153788 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-29 00:44:41.153803 | orchestrator | Sunday 29 March 2026 00:44:39 +0000 (0:00:00.125) 0:00:19.221 ********** 2026-03-29 00:44:41.153818 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:41.153832 | orchestrator | 2026-03-29 00:44:41.153846 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-29 00:44:41.153860 | orchestrator | Sunday 29 March 2026 00:44:39 +0000 (0:00:00.151) 0:00:19.373 ********** 2026-03-29 00:44:41.153875 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:41.153890 | orchestrator | 2026-03-29 00:44:41.153905 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-29 00:44:41.153919 | orchestrator | Sunday 29 March 2026 00:44:39 +0000 (0:00:00.134) 0:00:19.508 ********** 2026-03-29 00:44:41.153933 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:41.153944 | orchestrator | 2026-03-29 00:44:41.153963 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-29 00:44:41.153971 | orchestrator | Sunday 29 March 2026 00:44:40 +0000 (0:00:00.121) 0:00:19.629 ********** 2026-03-29 00:44:41.153982 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 
'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})  2026-03-29 00:44:41.153992 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})  2026-03-29 00:44:41.154001 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:41.154010 | orchestrator | 2026-03-29 00:44:41.154079 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-29 00:44:41.154089 | orchestrator | Sunday 29 March 2026 00:44:40 +0000 (0:00:00.153) 0:00:19.783 ********** 2026-03-29 00:44:41.154098 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})  2026-03-29 00:44:41.154107 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})  2026-03-29 00:44:41.154116 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:41.154124 | orchestrator | 2026-03-29 00:44:41.154133 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-29 00:44:41.154141 | orchestrator | Sunday 29 March 2026 00:44:40 +0000 (0:00:00.353) 0:00:20.136 ********** 2026-03-29 00:44:41.154150 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})  2026-03-29 00:44:41.154159 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})  2026-03-29 00:44:41.154187 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:41.154228 | orchestrator | 2026-03-29 00:44:41.154238 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-29 
00:44:41.154246 | orchestrator | Sunday 29 March 2026 00:44:40 +0000 (0:00:00.160) 0:00:20.297 ********** 2026-03-29 00:44:41.154255 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})  2026-03-29 00:44:41.154264 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})  2026-03-29 00:44:41.154273 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:41.154281 | orchestrator | 2026-03-29 00:44:41.154290 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-29 00:44:41.154299 | orchestrator | Sunday 29 March 2026 00:44:40 +0000 (0:00:00.170) 0:00:20.468 ********** 2026-03-29 00:44:41.154307 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})  2026-03-29 00:44:41.154316 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})  2026-03-29 00:44:41.154325 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:41.154333 | orchestrator | 2026-03-29 00:44:41.154342 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-29 00:44:41.154350 | orchestrator | Sunday 29 March 2026 00:44:41 +0000 (0:00:00.165) 0:00:20.633 ********** 2026-03-29 00:44:41.154370 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})  2026-03-29 00:44:45.706287 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 
'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})  2026-03-29 00:44:45.706443 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:45.706460 | orchestrator | 2026-03-29 00:44:45.706473 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-29 00:44:45.706485 | orchestrator | Sunday 29 March 2026 00:44:41 +0000 (0:00:00.160) 0:00:20.793 ********** 2026-03-29 00:44:45.706495 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})  2026-03-29 00:44:45.706506 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})  2026-03-29 00:44:45.706516 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:45.706526 | orchestrator | 2026-03-29 00:44:45.706535 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-29 00:44:45.706545 | orchestrator | Sunday 29 March 2026 00:44:41 +0000 (0:00:00.141) 0:00:20.934 ********** 2026-03-29 00:44:45.706555 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})  2026-03-29 00:44:45.706565 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})  2026-03-29 00:44:45.706612 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:45.706629 | orchestrator | 2026-03-29 00:44:45.706640 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-29 00:44:45.706650 | orchestrator | Sunday 29 March 2026 00:44:41 +0000 (0:00:00.124) 0:00:21.059 ********** 2026-03-29 00:44:45.706660 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:44:45.706671 | 
orchestrator | 2026-03-29 00:44:45.706711 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-29 00:44:45.706721 | orchestrator | Sunday 29 March 2026 00:44:42 +0000 (0:00:00.495) 0:00:21.555 ********** 2026-03-29 00:44:45.706731 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:44:45.706741 | orchestrator | 2026-03-29 00:44:45.706751 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-29 00:44:45.706781 | orchestrator | Sunday 29 March 2026 00:44:42 +0000 (0:00:00.490) 0:00:22.045 ********** 2026-03-29 00:44:45.706791 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:44:45.706800 | orchestrator | 2026-03-29 00:44:45.706810 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-29 00:44:45.706820 | orchestrator | Sunday 29 March 2026 00:44:42 +0000 (0:00:00.138) 0:00:22.184 ********** 2026-03-29 00:44:45.706830 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'vg_name': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'}) 2026-03-29 00:44:45.706842 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'vg_name': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'}) 2026-03-29 00:44:45.706851 | orchestrator | 2026-03-29 00:44:45.706862 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-29 00:44:45.706872 | orchestrator | Sunday 29 March 2026 00:44:42 +0000 (0:00:00.142) 0:00:22.326 ********** 2026-03-29 00:44:45.706881 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})  2026-03-29 00:44:45.706892 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 
'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})  2026-03-29 00:44:45.706901 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:45.706911 | orchestrator | 2026-03-29 00:44:45.706921 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-29 00:44:45.706930 | orchestrator | Sunday 29 March 2026 00:44:42 +0000 (0:00:00.128) 0:00:22.455 ********** 2026-03-29 00:44:45.706940 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})  2026-03-29 00:44:45.706950 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})  2026-03-29 00:44:45.706960 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:45.706969 | orchestrator | 2026-03-29 00:44:45.706979 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-29 00:44:45.706988 | orchestrator | Sunday 29 March 2026 00:44:43 +0000 (0:00:00.263) 0:00:22.719 ********** 2026-03-29 00:44:45.706998 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})  2026-03-29 00:44:45.707008 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})  2026-03-29 00:44:45.707018 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:45.707027 | orchestrator | 2026-03-29 00:44:45.707037 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-29 00:44:45.707047 | orchestrator | Sunday 29 March 2026 00:44:43 +0000 (0:00:00.138) 0:00:22.858 ********** 2026-03-29 00:44:45.707077 | orchestrator | ok: [testbed-node-3] => { 2026-03-29 
00:44:45.707088 | orchestrator |  "lvm_report": { 2026-03-29 00:44:45.707098 | orchestrator |  "lv": [ 2026-03-29 00:44:45.707108 | orchestrator |  { 2026-03-29 00:44:45.707119 | orchestrator |  "lv_name": "osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32", 2026-03-29 00:44:45.707129 | orchestrator |  "vg_name": "ceph-9db53e8f-4e16-545c-9934-db4b909c3b32" 2026-03-29 00:44:45.707139 | orchestrator |  }, 2026-03-29 00:44:45.707157 | orchestrator |  { 2026-03-29 00:44:45.707166 | orchestrator |  "lv_name": "osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5", 2026-03-29 00:44:45.707176 | orchestrator |  "vg_name": "ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5" 2026-03-29 00:44:45.707185 | orchestrator |  } 2026-03-29 00:44:45.707195 | orchestrator |  ], 2026-03-29 00:44:45.707204 | orchestrator |  "pv": [ 2026-03-29 00:44:45.707214 | orchestrator |  { 2026-03-29 00:44:45.707223 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-29 00:44:45.707233 | orchestrator |  "vg_name": "ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5" 2026-03-29 00:44:45.707242 | orchestrator |  }, 2026-03-29 00:44:45.707252 | orchestrator |  { 2026-03-29 00:44:45.707261 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-29 00:44:45.707271 | orchestrator |  "vg_name": "ceph-9db53e8f-4e16-545c-9934-db4b909c3b32" 2026-03-29 00:44:45.707280 | orchestrator |  } 2026-03-29 00:44:45.707291 | orchestrator |  ] 2026-03-29 00:44:45.707308 | orchestrator |  } 2026-03-29 00:44:45.707324 | orchestrator | } 2026-03-29 00:44:45.707340 | orchestrator | 2026-03-29 00:44:45.707354 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-29 00:44:45.707368 | orchestrator | 2026-03-29 00:44:45.707383 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-29 00:44:45.707408 | orchestrator | Sunday 29 March 2026 00:44:43 +0000 (0:00:00.273) 0:00:23.131 ********** 2026-03-29 00:44:45.707425 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-03-29 00:44:45.707442 | orchestrator | 2026-03-29 00:44:45.707456 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-29 00:44:45.707466 | orchestrator | Sunday 29 March 2026 00:44:43 +0000 (0:00:00.216) 0:00:23.348 ********** 2026-03-29 00:44:45.707476 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:44:45.707486 | orchestrator | 2026-03-29 00:44:45.707496 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:45.707505 | orchestrator | Sunday 29 March 2026 00:44:43 +0000 (0:00:00.188) 0:00:23.536 ********** 2026-03-29 00:44:45.707515 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-29 00:44:45.707525 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-29 00:44:45.707534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-29 00:44:45.707544 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-29 00:44:45.707553 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-29 00:44:45.707563 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-29 00:44:45.707625 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-29 00:44:45.707635 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-29 00:44:45.707645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-29 00:44:45.707654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-29 00:44:45.707664 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-29 00:44:45.707673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-29 00:44:45.707683 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-29 00:44:45.707692 | orchestrator | 2026-03-29 00:44:45.707702 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:45.707711 | orchestrator | Sunday 29 March 2026 00:44:44 +0000 (0:00:00.373) 0:00:23.910 ********** 2026-03-29 00:44:45.707721 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:45.707739 | orchestrator | 2026-03-29 00:44:45.707749 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:45.707759 | orchestrator | Sunday 29 March 2026 00:44:44 +0000 (0:00:00.166) 0:00:24.077 ********** 2026-03-29 00:44:45.707768 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:45.707778 | orchestrator | 2026-03-29 00:44:45.707787 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:45.707797 | orchestrator | Sunday 29 March 2026 00:44:44 +0000 (0:00:00.172) 0:00:24.250 ********** 2026-03-29 00:44:45.707806 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:45.707816 | orchestrator | 2026-03-29 00:44:45.707825 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:45.707835 | orchestrator | Sunday 29 March 2026 00:44:44 +0000 (0:00:00.173) 0:00:24.423 ********** 2026-03-29 00:44:45.707845 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:45.707855 | orchestrator | 2026-03-29 00:44:45.707864 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:45.707874 | orchestrator | Sunday 29 March 2026 00:44:45 +0000 
(0:00:00.452) 0:00:24.876 ********** 2026-03-29 00:44:45.707883 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:45.707893 | orchestrator | 2026-03-29 00:44:45.707902 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:45.707912 | orchestrator | Sunday 29 March 2026 00:44:45 +0000 (0:00:00.194) 0:00:25.070 ********** 2026-03-29 00:44:45.707922 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:45.707931 | orchestrator | 2026-03-29 00:44:45.707950 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:56.171823 | orchestrator | Sunday 29 March 2026 00:44:45 +0000 (0:00:00.180) 0:00:25.250 ********** 2026-03-29 00:44:56.171929 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:56.171943 | orchestrator | 2026-03-29 00:44:56.171952 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:56.171961 | orchestrator | Sunday 29 March 2026 00:44:45 +0000 (0:00:00.204) 0:00:25.454 ********** 2026-03-29 00:44:56.171969 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:56.171977 | orchestrator | 2026-03-29 00:44:56.171985 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:56.171993 | orchestrator | Sunday 29 March 2026 00:44:46 +0000 (0:00:00.185) 0:00:25.640 ********** 2026-03-29 00:44:56.172001 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a) 2026-03-29 00:44:56.172011 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a) 2026-03-29 00:44:56.172019 | orchestrator | 2026-03-29 00:44:56.172026 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:56.172034 | orchestrator | Sunday 29 March 2026 00:44:46 +0000 
(0:00:00.380) 0:00:26.021 ********** 2026-03-29 00:44:56.172042 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f8431fa8-afc6-4068-bff4-a67d5c0799f9) 2026-03-29 00:44:56.172050 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f8431fa8-afc6-4068-bff4-a67d5c0799f9) 2026-03-29 00:44:56.172059 | orchestrator | 2026-03-29 00:44:56.172066 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:56.172074 | orchestrator | Sunday 29 March 2026 00:44:46 +0000 (0:00:00.397) 0:00:26.419 ********** 2026-03-29 00:44:56.172082 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_08797191-4f26-4e13-8d53-ed6640c6fbd2) 2026-03-29 00:44:56.172091 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_08797191-4f26-4e13-8d53-ed6640c6fbd2) 2026-03-29 00:44:56.172098 | orchestrator | 2026-03-29 00:44:56.172106 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:56.172114 | orchestrator | Sunday 29 March 2026 00:44:47 +0000 (0:00:00.405) 0:00:26.824 ********** 2026-03-29 00:44:56.172122 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6eff12ff-972f-42e1-84ee-23c8e4926f48) 2026-03-29 00:44:56.172151 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6eff12ff-972f-42e1-84ee-23c8e4926f48) 2026-03-29 00:44:56.172160 | orchestrator | 2026-03-29 00:44:56.172168 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:56.172175 | orchestrator | Sunday 29 March 2026 00:44:47 +0000 (0:00:00.407) 0:00:27.231 ********** 2026-03-29 00:44:56.172183 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-29 00:44:56.172191 | orchestrator | 2026-03-29 00:44:56.172199 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 
00:44:56.172206 | orchestrator | Sunday 29 March 2026 00:44:48 +0000 (0:00:00.368) 0:00:27.600 ********** 2026-03-29 00:44:56.172214 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-29 00:44:56.172223 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-29 00:44:56.172231 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-29 00:44:56.172238 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-29 00:44:56.172246 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-29 00:44:56.172254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-29 00:44:56.172261 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-29 00:44:56.172270 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-29 00:44:56.172277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-29 00:44:56.172285 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-29 00:44:56.172293 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-29 00:44:56.172301 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-29 00:44:56.172308 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-29 00:44:56.172316 | orchestrator | 2026-03-29 00:44:56.172324 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:44:56.172332 | 
orchestrator | Sunday 29 March 2026 00:44:48 +0000 (0:00:00.508) 0:00:28.109 **********
2026-03-29 00:44:56.172341 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:44:56.172350 | orchestrator |
2026-03-29 00:44:56.172359 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:56.172368 | orchestrator | Sunday 29 March 2026 00:44:48 +0000 (0:00:00.188) 0:00:28.298 **********
2026-03-29 00:44:56.172378 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:44:56.172387 | orchestrator |
2026-03-29 00:44:56.172395 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:56.172404 | orchestrator | Sunday 29 March 2026 00:44:48 +0000 (0:00:00.185) 0:00:28.484 **********
2026-03-29 00:44:56.172413 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:44:56.172422 | orchestrator |
2026-03-29 00:44:56.172445 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:56.172456 | orchestrator | Sunday 29 March 2026 00:44:49 +0000 (0:00:00.199) 0:00:28.684 **********
2026-03-29 00:44:56.172464 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:44:56.172473 | orchestrator |
2026-03-29 00:44:56.172482 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:56.172491 | orchestrator | Sunday 29 March 2026 00:44:49 +0000 (0:00:00.187) 0:00:28.871 **********
2026-03-29 00:44:56.172500 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:44:56.172509 | orchestrator |
2026-03-29 00:44:56.172519 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:56.172533 | orchestrator | Sunday 29 March 2026 00:44:49 +0000 (0:00:00.183) 0:00:29.055 **********
2026-03-29 00:44:56.172542 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:44:56.172551 | orchestrator |
2026-03-29 00:44:56.172560 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:56.172569 | orchestrator | Sunday 29 March 2026 00:44:49 +0000 (0:00:00.208) 0:00:29.264 **********
2026-03-29 00:44:56.172579 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:44:56.172607 | orchestrator |
2026-03-29 00:44:56.172617 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:56.172626 | orchestrator | Sunday 29 March 2026 00:44:49 +0000 (0:00:00.203) 0:00:29.467 **********
2026-03-29 00:44:56.172650 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:44:56.172660 | orchestrator |
2026-03-29 00:44:56.172669 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:56.172682 | orchestrator | Sunday 29 March 2026 00:44:50 +0000 (0:00:00.192) 0:00:29.660 **********
2026-03-29 00:44:56.172692 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-29 00:44:56.172701 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-29 00:44:56.172709 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-29 00:44:56.172717 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-29 00:44:56.172724 | orchestrator |
2026-03-29 00:44:56.172732 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:56.172740 | orchestrator | Sunday 29 March 2026 00:44:51 +0000 (0:00:01.038) 0:00:30.698 **********
2026-03-29 00:44:56.172748 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:44:56.172755 | orchestrator |
2026-03-29 00:44:56.172763 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:56.172771 | orchestrator | Sunday 29 March 2026 00:44:51 +0000 (0:00:00.220) 0:00:30.918 **********
2026-03-29 00:44:56.172778 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:44:56.172786 | orchestrator |
2026-03-29 00:44:56.172794 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:56.172801 | orchestrator | Sunday 29 March 2026 00:44:51 +0000 (0:00:00.202) 0:00:31.121 **********
2026-03-29 00:44:56.172809 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:44:56.172817 | orchestrator |
2026-03-29 00:44:56.172825 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:56.172832 | orchestrator | Sunday 29 March 2026 00:44:52 +0000 (0:00:00.710) 0:00:31.832 **********
2026-03-29 00:44:56.172840 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:44:56.172847 | orchestrator |
2026-03-29 00:44:56.172855 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-29 00:44:56.172863 | orchestrator | Sunday 29 March 2026 00:44:52 +0000 (0:00:00.252) 0:00:32.084 **********
2026-03-29 00:44:56.172871 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:44:56.172878 | orchestrator |
2026-03-29 00:44:56.172886 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-29 00:44:56.172894 | orchestrator | Sunday 29 March 2026 00:44:52 +0000 (0:00:00.152) 0:00:32.237 **********
2026-03-29 00:44:56.172901 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ce40293b-1bc0-5558-a1b7-16c9a624d7c9'}})
2026-03-29 00:44:56.172910 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c9903f66-e17d-5d19-b140-42471f0a3aa8'}})
2026-03-29 00:44:56.172917 | orchestrator |
2026-03-29 00:44:56.172925 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-29 00:44:56.172933 | orchestrator | Sunday 29 March 2026 00:44:52 +0000 (0:00:00.225) 0:00:32.462 **********
2026-03-29 00:44:56.172942 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:44:56.172951 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:44:56.172965 | orchestrator |
2026-03-29 00:44:56.172973 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-29 00:44:56.172981 | orchestrator | Sunday 29 March 2026 00:44:54 +0000 (0:00:01.882) 0:00:34.345 **********
2026-03-29 00:44:56.172989 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:44:56.172997 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:44:56.173005 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:44:56.173013 | orchestrator |
2026-03-29 00:44:56.173021 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-29 00:44:56.173028 | orchestrator | Sunday 29 March 2026 00:44:54 +0000 (0:00:00.156) 0:00:34.502 **********
2026-03-29 00:44:56.173036 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:44:56.173050 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:01.743248 | orchestrator |
2026-03-29 00:45:01.743363 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-29 00:45:01.743374 | orchestrator | Sunday 29 March 2026 00:44:56 +0000 (0:00:01.285) 0:00:35.787 **********
2026-03-29 00:45:01.743381 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:45:01.743389 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:01.743396 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:01.743402 | orchestrator |
2026-03-29 00:45:01.743409 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-29 00:45:01.743414 | orchestrator | Sunday 29 March 2026 00:44:56 +0000 (0:00:00.151) 0:00:35.938 **********
2026-03-29 00:45:01.743420 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:01.743425 | orchestrator |
2026-03-29 00:45:01.743430 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-29 00:45:01.743436 | orchestrator | Sunday 29 March 2026 00:44:56 +0000 (0:00:00.137) 0:00:36.076 **********
2026-03-29 00:45:01.743460 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:45:01.743466 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:01.743471 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:01.743477 | orchestrator |
2026-03-29 00:45:01.743483 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-29 00:45:01.743488 | orchestrator | Sunday 29 March 2026 00:44:56 +0000 (0:00:00.162) 0:00:36.238 **********
2026-03-29 00:45:01.743494 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:01.743499 | orchestrator |
2026-03-29 00:45:01.743505 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-29 00:45:01.743511 | orchestrator | Sunday 29 March 2026 00:44:56 +0000 (0:00:00.150) 0:00:36.389 **********
2026-03-29 00:45:01.743516 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:45:01.743522 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:01.743566 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:01.743573 | orchestrator |
2026-03-29 00:45:01.743578 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-29 00:45:01.743584 | orchestrator | Sunday 29 March 2026 00:44:56 +0000 (0:00:00.148) 0:00:36.538 **********
2026-03-29 00:45:01.743589 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:01.743596 | orchestrator |
2026-03-29 00:45:01.743638 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-29 00:45:01.743644 | orchestrator | Sunday 29 March 2026 00:44:57 +0000 (0:00:00.328) 0:00:36.867 **********
2026-03-29 00:45:01.743650 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:45:01.743655 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:01.743661 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:01.743667 | orchestrator |
2026-03-29 00:45:01.743672 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-29 00:45:01.743678 | orchestrator | Sunday 29 March 2026 00:44:57 +0000 (0:00:00.149) 0:00:37.016 **********
2026-03-29 00:45:01.743683 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:45:01.743690 | orchestrator |
2026-03-29 00:45:01.743695 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-29 00:45:01.743701 | orchestrator | Sunday 29 March 2026 00:44:57 +0000 (0:00:00.128) 0:00:37.144 **********
2026-03-29 00:45:01.743706 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:45:01.743712 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:01.743718 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:01.743723 | orchestrator |
2026-03-29 00:45:01.743728 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-29 00:45:01.743734 | orchestrator | Sunday 29 March 2026 00:44:57 +0000 (0:00:00.171) 0:00:37.315 **********
2026-03-29 00:45:01.743739 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:45:01.743745 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:01.743750 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:01.743756 | orchestrator |
2026-03-29 00:45:01.743761 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-29 00:45:01.743781 | orchestrator | Sunday 29 March 2026 00:44:57 +0000 (0:00:00.141) 0:00:37.457 **********
2026-03-29 00:45:01.743788 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:45:01.743795 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:01.743801 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:01.743807 | orchestrator |
2026-03-29 00:45:01.743813 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-29 00:45:01.743819 | orchestrator | Sunday 29 March 2026 00:44:58 +0000 (0:00:00.149) 0:00:37.607 **********
2026-03-29 00:45:01.743826 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:01.743832 | orchestrator |
2026-03-29 00:45:01.743838 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-29 00:45:01.743844 | orchestrator | Sunday 29 March 2026 00:44:58 +0000 (0:00:00.130) 0:00:37.737 **********
2026-03-29 00:45:01.743856 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:01.743863 | orchestrator |
2026-03-29 00:45:01.743869 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-29 00:45:01.743879 | orchestrator | Sunday 29 March 2026 00:44:58 +0000 (0:00:00.133) 0:00:37.870 **********
2026-03-29 00:45:01.743885 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:01.743892 | orchestrator |
2026-03-29 00:45:01.743898 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-29 00:45:01.743904 | orchestrator | Sunday 29 March 2026 00:44:58 +0000 (0:00:00.117) 0:00:37.988 **********
2026-03-29 00:45:01.743911 | orchestrator | ok: [testbed-node-4] => {
2026-03-29 00:45:01.743918 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-29 00:45:01.743924 | orchestrator | }
2026-03-29 00:45:01.743931 | orchestrator |
2026-03-29 00:45:01.743937 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-29 00:45:01.743945 | orchestrator | Sunday 29 March 2026 00:44:58 +0000 (0:00:00.141) 0:00:38.130 **********
2026-03-29 00:45:01.743954 | orchestrator | ok: [testbed-node-4] => {
2026-03-29 00:45:01.743963 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-29 00:45:01.743971 | orchestrator | }
2026-03-29 00:45:01.743979 | orchestrator |
2026-03-29 00:45:01.743988 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-29 00:45:01.743996 | orchestrator | Sunday 29 March 2026 00:44:58 +0000 (0:00:00.139) 0:00:38.270 **********
2026-03-29 00:45:01.744004 | orchestrator | ok: [testbed-node-4] => {
2026-03-29 00:45:01.744012 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-29 00:45:01.744021 | orchestrator | }
2026-03-29 00:45:01.744032 | orchestrator |
2026-03-29 00:45:01.744047 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-29 00:45:01.744056 | orchestrator | Sunday 29 March 2026 00:44:58 +0000 (0:00:00.138) 0:00:38.408 **********
2026-03-29 00:45:01.744065 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:45:01.744073 | orchestrator |
2026-03-29 00:45:01.744082 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-29 00:45:01.744092 | orchestrator | Sunday 29 March 2026 00:44:59 +0000 (0:00:00.713) 0:00:39.121 **********
2026-03-29 00:45:01.744101 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:45:01.744109 | orchestrator |
2026-03-29 00:45:01.744117 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-29 00:45:01.744124 | orchestrator | Sunday 29 March 2026 00:45:00 +0000 (0:00:00.556) 0:00:39.678 **********
2026-03-29 00:45:01.744133 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:45:01.744141 | orchestrator |
2026-03-29 00:45:01.744150 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-29 00:45:01.744159 | orchestrator | Sunday 29 March 2026 00:45:00 +0000 (0:00:00.508) 0:00:40.187 **********
2026-03-29 00:45:01.744167 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:45:01.744176 | orchestrator |
2026-03-29 00:45:01.744184 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-29 00:45:01.744193 | orchestrator | Sunday 29 March 2026 00:45:00 +0000 (0:00:00.147) 0:00:40.334 **********
2026-03-29 00:45:01.744200 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:01.744206 | orchestrator |
2026-03-29 00:45:01.744211 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-29 00:45:01.744216 | orchestrator | Sunday 29 March 2026 00:45:00 +0000 (0:00:00.109) 0:00:40.444 **********
2026-03-29 00:45:01.744222 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:01.744227 | orchestrator |
2026-03-29 00:45:01.744232 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-29 00:45:01.744238 | orchestrator | Sunday 29 March 2026 00:45:01 +0000 (0:00:00.136) 0:00:40.581 **********
2026-03-29 00:45:01.744243 | orchestrator | ok: [testbed-node-4] => {
2026-03-29 00:45:01.744249 | orchestrator |     "vgs_report": {
2026-03-29 00:45:01.744255 | orchestrator |         "vg": []
2026-03-29 00:45:01.744261 | orchestrator |     }
2026-03-29 00:45:01.744267 | orchestrator | }
2026-03-29 00:45:01.744278 | orchestrator |
2026-03-29 00:45:01.744284 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-29 00:45:01.744289 | orchestrator | Sunday 29 March 2026 00:45:01 +0000 (0:00:00.164) 0:00:40.745 **********
2026-03-29 00:45:01.744294 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:01.744300 | orchestrator |
2026-03-29 00:45:01.744305 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-29 00:45:01.744311 | orchestrator | Sunday 29 March 2026 00:45:01 +0000 (0:00:00.135) 0:00:40.881 **********
2026-03-29 00:45:01.744316 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:01.744321 | orchestrator |
2026-03-29 00:45:01.744326 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-29 00:45:01.744332 | orchestrator | Sunday 29 March 2026 00:45:01 +0000 (0:00:00.137) 0:00:41.018 **********
2026-03-29 00:45:01.744337 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:01.744342 | orchestrator |
2026-03-29 00:45:01.744348 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-29 00:45:01.744353 | orchestrator | Sunday 29 March 2026 00:45:01 +0000 (0:00:00.138) 0:00:41.157 **********
2026-03-29 00:45:01.744359 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:01.744364 | orchestrator |
2026-03-29 00:45:01.744376 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-29 00:45:06.012965 | orchestrator | Sunday 29 March 2026 00:45:01 +0000 (0:00:00.133) 0:00:41.290 **********
2026-03-29 00:45:06.013070 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:06.013083 | orchestrator |
2026-03-29 00:45:06.013094 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-29 00:45:06.013104 | orchestrator | Sunday 29 March 2026 00:45:01 +0000 (0:00:00.141) 0:00:41.432 **********
2026-03-29 00:45:06.013112 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:06.013121 | orchestrator |
2026-03-29 00:45:06.013130 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-29 00:45:06.013139 | orchestrator | Sunday 29 March 2026 00:45:02 +0000 (0:00:00.322) 0:00:41.754 **********
2026-03-29 00:45:06.013148 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:06.013156 | orchestrator |
2026-03-29 00:45:06.013165 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-29 00:45:06.013174 | orchestrator | Sunday 29 March 2026 00:45:02 +0000 (0:00:00.136) 0:00:41.891 **********
2026-03-29 00:45:06.013182 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:06.013191 | orchestrator |
2026-03-29 00:45:06.013200 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-29 00:45:06.013208 | orchestrator | Sunday 29 March 2026 00:45:02 +0000 (0:00:00.134) 0:00:42.026 **********
2026-03-29 00:45:06.013217 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:06.013226 | orchestrator |
2026-03-29 00:45:06.013235 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-29 00:45:06.013243 | orchestrator | Sunday 29 March 2026 00:45:02 +0000 (0:00:00.132) 0:00:42.159 **********
2026-03-29 00:45:06.013252 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:06.013261 | orchestrator |
2026-03-29 00:45:06.013269 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-29 00:45:06.013278 | orchestrator | Sunday 29 March 2026 00:45:02 +0000 (0:00:00.135) 0:00:42.295 **********
2026-03-29 00:45:06.013287 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:06.013296 | orchestrator |
2026-03-29 00:45:06.013323 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-29 00:45:06.013333 | orchestrator | Sunday 29 March 2026 00:45:02 +0000 (0:00:00.140) 0:00:42.435 **********
2026-03-29 00:45:06.013342 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:06.013351 | orchestrator |
2026-03-29 00:45:06.013359 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-29 00:45:06.013368 | orchestrator | Sunday 29 March 2026 00:45:03 +0000 (0:00:00.143) 0:00:42.578 **********
2026-03-29 00:45:06.013376 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:06.013407 | orchestrator |
2026-03-29 00:45:06.013416 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-29 00:45:06.013425 | orchestrator | Sunday 29 March 2026 00:45:03 +0000 (0:00:00.140) 0:00:42.719 **********
2026-03-29 00:45:06.013433 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:06.013442 | orchestrator |
2026-03-29 00:45:06.013450 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-29 00:45:06.013459 | orchestrator | Sunday 29 March 2026 00:45:03 +0000 (0:00:00.150) 0:00:42.869 **********
2026-03-29 00:45:06.013469 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:45:06.013481 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:06.013492 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:06.013503 | orchestrator |
2026-03-29 00:45:06.013513 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-29 00:45:06.013523 | orchestrator | Sunday 29 March 2026 00:45:03 +0000 (0:00:00.160) 0:00:43.029 **********
2026-03-29 00:45:06.013534 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:45:06.013545 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:06.013555 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:06.013565 | orchestrator |
2026-03-29 00:45:06.013576 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-29 00:45:06.013586 | orchestrator | Sunday 29 March 2026 00:45:03 +0000 (0:00:00.156) 0:00:43.186 **********
2026-03-29 00:45:06.013596 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:45:06.013672 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:06.013691 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:06.013706 | orchestrator |
2026-03-29 00:45:06.013715 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-29 00:45:06.013724 | orchestrator | Sunday 29 March 2026 00:45:03 +0000 (0:00:00.137) 0:00:43.323 **********
2026-03-29 00:45:06.013733 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:45:06.013742 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:06.013752 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:06.013760 | orchestrator |
2026-03-29 00:45:06.013788 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-29 00:45:06.013804 | orchestrator | Sunday 29 March 2026 00:45:04 +0000 (0:00:00.272) 0:00:43.596 **********
2026-03-29 00:45:06.013819 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:45:06.013833 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:06.013847 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:06.013861 | orchestrator |
2026-03-29 00:45:06.013874 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-29 00:45:06.013887 | orchestrator | Sunday 29 March 2026 00:45:04 +0000 (0:00:00.141) 0:00:43.738 **********
2026-03-29 00:45:06.013912 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:45:06.013935 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:06.013951 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:06.013966 | orchestrator |
2026-03-29 00:45:06.013981 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-29 00:45:06.013994 | orchestrator | Sunday 29 March 2026 00:45:04 +0000 (0:00:00.125) 0:00:43.863 **********
2026-03-29 00:45:06.014010 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:45:06.014081 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:06.014098 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:06.014114 | orchestrator |
2026-03-29 00:45:06.014128 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-29 00:45:06.014142 | orchestrator | Sunday 29 March 2026 00:45:04 +0000 (0:00:00.145) 0:00:44.009 **********
2026-03-29 00:45:06.014151 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:45:06.014160 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:06.014169 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:06.014177 | orchestrator |
2026-03-29 00:45:06.014186 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-29 00:45:06.014195 | orchestrator | Sunday 29 March 2026 00:45:04 +0000 (0:00:00.122) 0:00:44.132 **********
2026-03-29 00:45:06.014203 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:45:06.014212 | orchestrator |
2026-03-29 00:45:06.014221 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-29 00:45:06.014229 | orchestrator | Sunday 29 March 2026 00:45:05 +0000 (0:00:00.461) 0:00:44.593 **********
2026-03-29 00:45:06.014238 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:45:06.014247 | orchestrator |
2026-03-29 00:45:06.014255 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-29 00:45:06.014264 | orchestrator | Sunday 29 March 2026 00:45:05 +0000 (0:00:00.483) 0:00:45.077 **********
2026-03-29 00:45:06.014272 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:45:06.014280 | orchestrator |
2026-03-29 00:45:06.014289 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-29 00:45:06.014297 | orchestrator | Sunday 29 March 2026 00:45:05 +0000 (0:00:00.123) 0:00:45.201 **********
2026-03-29 00:45:06.014306 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'vg_name': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:06.014317 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'vg_name': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:45:06.014325 | orchestrator |
2026-03-29 00:45:06.014334 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-29 00:45:06.014342 | orchestrator | Sunday 29 March 2026 00:45:05 +0000 (0:00:00.154) 0:00:45.355 **********
2026-03-29 00:45:06.014351 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:45:06.014360 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:06.014369 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:06.014385 | orchestrator |
2026-03-29 00:45:06.014394 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-29 00:45:06.014402 | orchestrator | Sunday 29 March 2026 00:45:05 +0000 (0:00:00.135) 0:00:45.491 **********
2026-03-29 00:45:06.014411 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:45:06.014429 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:11.437907 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:11.438098 | orchestrator |
2026-03-29 00:45:11.438120 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-29 00:45:11.438128 | orchestrator | Sunday 29 March 2026 00:45:06 +0000 (0:00:00.141) 0:00:45.632 **********
2026-03-29 00:45:11.438135 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:45:11.438143 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:45:11.438149 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:45:11.438155 | orchestrator |
2026-03-29 00:45:11.438160 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-29 00:45:11.438166 | orchestrator | Sunday 29 March 2026 00:45:06 +0000 (0:00:00.149) 0:00:45.782 **********
2026-03-29 00:45:11.438172 | orchestrator | ok: [testbed-node-4] => {
2026-03-29 00:45:11.438178 | orchestrator |     "lvm_report": {
2026-03-29 00:45:11.438186 | orchestrator |         "lv": [
2026-03-29 00:45:11.438209 | orchestrator |             {
2026-03-29 00:45:11.438215 | orchestrator |                 "lv_name": "osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8",
2026-03-29 00:45:11.438222 | orchestrator |                 "vg_name": "ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8"
2026-03-29 00:45:11.438227 | orchestrator |             },
2026-03-29 00:45:11.438233 | orchestrator |             {
2026-03-29 00:45:11.438238 | orchestrator |                 "lv_name": "osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9",
2026-03-29 00:45:11.438244 | orchestrator |                 "vg_name": "ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9"
2026-03-29 00:45:11.438249 | orchestrator |             }
2026-03-29 00:45:11.438254 | orchestrator |         ],
2026-03-29 00:45:11.438260 | orchestrator |         "pv": [
2026-03-29 00:45:11.438266 | orchestrator |             {
2026-03-29 00:45:11.438271 | orchestrator |                 "pv_name": "/dev/sdb",
2026-03-29 00:45:11.438276 | orchestrator |                 "vg_name": "ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9"
2026-03-29 00:45:11.438282 | orchestrator |             },
2026-03-29 00:45:11.438287 | orchestrator |             {
2026-03-29 00:45:11.438293 | orchestrator |                 "pv_name": "/dev/sdc",
2026-03-29 00:45:11.438298 | orchestrator |                 "vg_name": "ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8"
2026-03-29 00:45:11.438304 | orchestrator |             }
2026-03-29 00:45:11.438310 | orchestrator |         ]
2026-03-29 00:45:11.438316 | orchestrator |     }
2026-03-29 00:45:11.438321 | orchestrator | }
2026-03-29 00:45:11.438327 | orchestrator |
2026-03-29 00:45:11.438333 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-29 00:45:11.438338 | orchestrator |
2026-03-29 00:45:11.438344 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-29 00:45:11.438349 | orchestrator | Sunday 29 March 2026 00:45:06 +0000 (0:00:00.375) 0:00:46.158 **********
2026-03-29 00:45:11.438355 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-29 00:45:11.438360 | orchestrator |
2026-03-29 00:45:11.438366 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-29 00:45:11.438371 | orchestrator | Sunday 29 March 2026 00:45:06 +0000 (0:00:00.227) 0:00:46.385 **********
2026-03-29 00:45:11.438399 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:45:11.438404 | orchestrator |
2026-03-29 00:45:11.438410 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:11.438415 | orchestrator | Sunday 29 March 2026 00:45:07 +0000 (0:00:00.202) 0:00:46.588 **********
2026-03-29 00:45:11.438421 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-29 00:45:11.438428 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-29 00:45:11.438436 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-29 00:45:11.438449 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-29 00:45:11.438459 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-29 00:45:11.438468 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-29 00:45:11.438477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-29 00:45:11.438486 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-29 00:45:11.438494 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-29 00:45:11.438503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-29 00:45:11.438511 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-29 00:45:11.438521 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-29 00:45:11.438529 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-29 00:45:11.438539 | orchestrator |
2026-03-29 00:45:11.438549 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:11.438558 | orchestrator | Sunday 29 March 2026 00:45:07 +0000 (0:00:00.390) 0:00:46.979 **********
2026-03-29 00:45:11.438568 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:45:11.438576 | orchestrator |
2026-03-29 00:45:11.438582 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:11.438588 | orchestrator | Sunday 29 March 2026 00:45:07 +0000 (0:00:00.162) 0:00:47.142 **********
2026-03-29 00:45:11.438595 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:45:11.438601 | orchestrator |
2026-03-29 00:45:11.438607 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:11.438651 | orchestrator | Sunday 29 March 2026 00:45:07 +0000 (0:00:00.191) 0:00:47.333 **********
2026-03-29 00:45:11.438658 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:45:11.438663 | orchestrator |
2026-03-29 00:45:11.438669 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:11.438674 | orchestrator | Sunday 29 March 2026 00:45:07 +0000 (0:00:00.195) 0:00:47.529 **********
2026-03-29 00:45:11.438679 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:45:11.438685 | orchestrator |
2026-03-29 00:45:11.438690 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:11.438695 | orchestrator | Sunday 29 March 2026 00:45:08 +0000 (0:00:00.155) 0:00:47.684 **********
2026-03-29 00:45:11.438701 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:45:11.438706 | orchestrator |
2026-03-29 00:45:11.438711 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:11.438717 | orchestrator | Sunday 29 March 2026 00:45:08 +0000 (0:00:00.168) 0:00:47.853 **********
2026-03-29 00:45:11.438722 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:45:11.438728 | orchestrator |
2026-03-29 00:45:11.438733 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:11.438739 | orchestrator | Sunday 29 March 2026 00:45:08 +0000 (0:00:00.444) 0:00:48.297 **********
2026-03-29 00:45:11.438744 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:45:11.438757 | orchestrator |
2026-03-29 00:45:11.438762 | orchestrator | TASK [Add known links to the list of
available block devices] ****************** 2026-03-29 00:45:11.438768 | orchestrator | Sunday 29 March 2026 00:45:08 +0000 (0:00:00.181) 0:00:48.479 ********** 2026-03-29 00:45:11.438774 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:11.438779 | orchestrator | 2026-03-29 00:45:11.438785 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:45:11.438790 | orchestrator | Sunday 29 March 2026 00:45:09 +0000 (0:00:00.189) 0:00:48.669 ********** 2026-03-29 00:45:11.438795 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987) 2026-03-29 00:45:11.438802 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987) 2026-03-29 00:45:11.438808 | orchestrator | 2026-03-29 00:45:11.438813 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:45:11.438818 | orchestrator | Sunday 29 March 2026 00:45:09 +0000 (0:00:00.382) 0:00:49.052 ********** 2026-03-29 00:45:11.438824 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_19b179cd-386f-4584-8a4b-106e5ad8592d) 2026-03-29 00:45:11.438829 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_19b179cd-386f-4584-8a4b-106e5ad8592d) 2026-03-29 00:45:11.438834 | orchestrator | 2026-03-29 00:45:11.438840 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:45:11.438845 | orchestrator | Sunday 29 March 2026 00:45:09 +0000 (0:00:00.380) 0:00:49.432 ********** 2026-03-29 00:45:11.438851 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_64dd44e8-56db-4990-9653-26f9a904c769) 2026-03-29 00:45:11.438856 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_64dd44e8-56db-4990-9653-26f9a904c769) 2026-03-29 00:45:11.438865 | orchestrator | 2026-03-29 00:45:11.438874 | orchestrator | 
TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:45:11.438882 | orchestrator | Sunday 29 March 2026 00:45:10 +0000 (0:00:00.398) 0:00:49.830 ********** 2026-03-29 00:45:11.438889 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_66d732d5-e9a7-47c2-8d7a-ba89d690a00e) 2026-03-29 00:45:11.438897 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_66d732d5-e9a7-47c2-8d7a-ba89d690a00e) 2026-03-29 00:45:11.438905 | orchestrator | 2026-03-29 00:45:11.438912 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:45:11.438921 | orchestrator | Sunday 29 March 2026 00:45:10 +0000 (0:00:00.481) 0:00:50.312 ********** 2026-03-29 00:45:11.438930 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-29 00:45:11.438939 | orchestrator | 2026-03-29 00:45:11.438949 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:45:11.438957 | orchestrator | Sunday 29 March 2026 00:45:11 +0000 (0:00:00.354) 0:00:50.666 ********** 2026-03-29 00:45:11.438967 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-29 00:45:11.438974 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-29 00:45:11.438980 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-29 00:45:11.438985 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-29 00:45:11.438990 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-29 00:45:11.438996 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-29 00:45:11.439040 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-29 00:45:11.439046 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-29 00:45:11.439051 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-29 00:45:11.439063 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-29 00:45:11.439069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-29 00:45:11.439080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-29 00:45:20.048575 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-29 00:45:20.048691 | orchestrator | 2026-03-29 00:45:20.048700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:45:20.048706 | orchestrator | Sunday 29 March 2026 00:45:11 +0000 (0:00:00.397) 0:00:51.064 ********** 2026-03-29 00:45:20.048712 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.048718 | orchestrator | 2026-03-29 00:45:20.048723 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:45:20.048728 | orchestrator | Sunday 29 March 2026 00:45:11 +0000 (0:00:00.210) 0:00:51.275 ********** 2026-03-29 00:45:20.048733 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.048737 | orchestrator | 2026-03-29 00:45:20.048742 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:45:20.048747 | orchestrator | Sunday 29 March 2026 00:45:11 +0000 (0:00:00.217) 0:00:51.492 ********** 2026-03-29 00:45:20.048752 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.048756 | orchestrator | 2026-03-29 00:45:20.048761 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:45:20.048777 | orchestrator | Sunday 29 March 2026 00:45:12 +0000 (0:00:00.562) 0:00:52.054 ********** 2026-03-29 00:45:20.048782 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.048786 | orchestrator | 2026-03-29 00:45:20.048791 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:45:20.048795 | orchestrator | Sunday 29 March 2026 00:45:12 +0000 (0:00:00.201) 0:00:52.256 ********** 2026-03-29 00:45:20.048800 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.048804 | orchestrator | 2026-03-29 00:45:20.048809 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:45:20.048814 | orchestrator | Sunday 29 March 2026 00:45:12 +0000 (0:00:00.214) 0:00:52.470 ********** 2026-03-29 00:45:20.048818 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.048823 | orchestrator | 2026-03-29 00:45:20.048827 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:45:20.048832 | orchestrator | Sunday 29 March 2026 00:45:13 +0000 (0:00:00.199) 0:00:52.669 ********** 2026-03-29 00:45:20.048836 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.048841 | orchestrator | 2026-03-29 00:45:20.048846 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:45:20.048850 | orchestrator | Sunday 29 March 2026 00:45:13 +0000 (0:00:00.199) 0:00:52.868 ********** 2026-03-29 00:45:20.048855 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.048859 | orchestrator | 2026-03-29 00:45:20.048864 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:45:20.048869 | orchestrator | Sunday 29 March 2026 00:45:13 +0000 (0:00:00.231) 0:00:53.100 ********** 
2026-03-29 00:45:20.048874 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-29 00:45:20.048879 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-29 00:45:20.048884 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-29 00:45:20.048889 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-29 00:45:20.048893 | orchestrator | 2026-03-29 00:45:20.048898 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:45:20.048903 | orchestrator | Sunday 29 March 2026 00:45:14 +0000 (0:00:00.685) 0:00:53.785 ********** 2026-03-29 00:45:20.048907 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.048912 | orchestrator | 2026-03-29 00:45:20.048916 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:45:20.048935 | orchestrator | Sunday 29 March 2026 00:45:14 +0000 (0:00:00.192) 0:00:53.978 ********** 2026-03-29 00:45:20.048939 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.048944 | orchestrator | 2026-03-29 00:45:20.048949 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:45:20.048953 | orchestrator | Sunday 29 March 2026 00:45:14 +0000 (0:00:00.205) 0:00:54.184 ********** 2026-03-29 00:45:20.048958 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.048962 | orchestrator | 2026-03-29 00:45:20.048967 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:45:20.048971 | orchestrator | Sunday 29 March 2026 00:45:14 +0000 (0:00:00.190) 0:00:54.374 ********** 2026-03-29 00:45:20.048976 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.048980 | orchestrator | 2026-03-29 00:45:20.048985 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-29 00:45:20.048989 | orchestrator | Sunday 29 March 2026 00:45:15 +0000 
(0:00:00.199) 0:00:54.574 ********** 2026-03-29 00:45:20.048994 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.048998 | orchestrator | 2026-03-29 00:45:20.049003 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-29 00:45:20.049007 | orchestrator | Sunday 29 March 2026 00:45:15 +0000 (0:00:00.317) 0:00:54.891 ********** 2026-03-29 00:45:20.049012 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '185c2dd0-6b1c-571f-b734-244d928106eb'}}) 2026-03-29 00:45:20.049017 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '18721a71-2d87-5ab0-bec8-5e03a015e695'}}) 2026-03-29 00:45:20.049021 | orchestrator | 2026-03-29 00:45:20.049026 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-29 00:45:20.049031 | orchestrator | Sunday 29 March 2026 00:45:15 +0000 (0:00:00.217) 0:00:55.109 ********** 2026-03-29 00:45:20.049036 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'}) 2026-03-29 00:45:20.049043 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'}) 2026-03-29 00:45:20.049048 | orchestrator | 2026-03-29 00:45:20.049053 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-29 00:45:20.049068 | orchestrator | Sunday 29 March 2026 00:45:17 +0000 (0:00:01.824) 0:00:56.933 ********** 2026-03-29 00:45:20.049073 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})  2026-03-29 00:45:20.049079 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})  2026-03-29 00:45:20.049083 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.049088 | orchestrator | 2026-03-29 00:45:20.049093 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-29 00:45:20.049097 | orchestrator | Sunday 29 March 2026 00:45:17 +0000 (0:00:00.156) 0:00:57.090 ********** 2026-03-29 00:45:20.049102 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'}) 2026-03-29 00:45:20.049110 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'}) 2026-03-29 00:45:20.049114 | orchestrator | 2026-03-29 00:45:20.049119 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-29 00:45:20.049123 | orchestrator | Sunday 29 March 2026 00:45:18 +0000 (0:00:01.311) 0:00:58.401 ********** 2026-03-29 00:45:20.049128 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})  2026-03-29 00:45:20.049137 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})  2026-03-29 00:45:20.049143 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.049148 | orchestrator | 2026-03-29 00:45:20.049153 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-29 00:45:20.049159 | orchestrator | Sunday 29 March 2026 00:45:19 +0000 (0:00:00.148) 0:00:58.549 ********** 2026-03-29 00:45:20.049164 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.049169 | 
orchestrator | 2026-03-29 00:45:20.049174 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-29 00:45:20.049180 | orchestrator | Sunday 29 March 2026 00:45:19 +0000 (0:00:00.137) 0:00:58.687 ********** 2026-03-29 00:45:20.049185 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})  2026-03-29 00:45:20.049190 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})  2026-03-29 00:45:20.049196 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.049201 | orchestrator | 2026-03-29 00:45:20.049206 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-29 00:45:20.049211 | orchestrator | Sunday 29 March 2026 00:45:19 +0000 (0:00:00.137) 0:00:58.825 ********** 2026-03-29 00:45:20.049216 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.049222 | orchestrator | 2026-03-29 00:45:20.049227 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-29 00:45:20.049232 | orchestrator | Sunday 29 March 2026 00:45:19 +0000 (0:00:00.131) 0:00:58.956 ********** 2026-03-29 00:45:20.049237 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})  2026-03-29 00:45:20.049243 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})  2026-03-29 00:45:20.049248 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.049253 | orchestrator | 2026-03-29 00:45:20.049258 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
2026-03-29 00:45:20.049264 | orchestrator | Sunday 29 March 2026 00:45:19 +0000 (0:00:00.160) 0:00:59.117 ********** 2026-03-29 00:45:20.049269 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.049274 | orchestrator | 2026-03-29 00:45:20.049279 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-29 00:45:20.049284 | orchestrator | Sunday 29 March 2026 00:45:19 +0000 (0:00:00.141) 0:00:59.259 ********** 2026-03-29 00:45:20.049290 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})  2026-03-29 00:45:20.049295 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})  2026-03-29 00:45:20.049301 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:20.049306 | orchestrator | 2026-03-29 00:45:20.049311 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-29 00:45:20.049317 | orchestrator | Sunday 29 March 2026 00:45:19 +0000 (0:00:00.146) 0:00:59.405 ********** 2026-03-29 00:45:20.049322 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:45:20.049327 | orchestrator | 2026-03-29 00:45:20.049332 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-29 00:45:20.049337 | orchestrator | Sunday 29 March 2026 00:45:19 +0000 (0:00:00.130) 0:00:59.536 ********** 2026-03-29 00:45:20.049346 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})  2026-03-29 00:45:25.932242 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})  2026-03-29 00:45:25.932343 | 
orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.932355 | orchestrator | 2026-03-29 00:45:25.932366 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-29 00:45:25.932378 | orchestrator | Sunday 29 March 2026 00:45:20 +0000 (0:00:00.364) 0:00:59.900 ********** 2026-03-29 00:45:25.932388 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})  2026-03-29 00:45:25.932397 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})  2026-03-29 00:45:25.932407 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.932416 | orchestrator | 2026-03-29 00:45:25.932439 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-29 00:45:25.932449 | orchestrator | Sunday 29 March 2026 00:45:20 +0000 (0:00:00.146) 0:01:00.046 ********** 2026-03-29 00:45:25.932458 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})  2026-03-29 00:45:25.932467 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})  2026-03-29 00:45:25.932476 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.932485 | orchestrator | 2026-03-29 00:45:25.932494 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-29 00:45:25.932503 | orchestrator | Sunday 29 March 2026 00:45:20 +0000 (0:00:00.151) 0:01:00.198 ********** 2026-03-29 00:45:25.932512 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.932522 | orchestrator | 2026-03-29 00:45:25.932531 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-29 00:45:25.932540 | orchestrator | Sunday 29 March 2026 00:45:20 +0000 (0:00:00.136) 0:01:00.335 ********** 2026-03-29 00:45:25.932549 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.932558 | orchestrator | 2026-03-29 00:45:25.932567 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-29 00:45:25.932576 | orchestrator | Sunday 29 March 2026 00:45:20 +0000 (0:00:00.133) 0:01:00.469 ********** 2026-03-29 00:45:25.932584 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.932594 | orchestrator | 2026-03-29 00:45:25.932603 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-29 00:45:25.932613 | orchestrator | Sunday 29 March 2026 00:45:21 +0000 (0:00:00.133) 0:01:00.602 ********** 2026-03-29 00:45:25.932622 | orchestrator | ok: [testbed-node-5] => { 2026-03-29 00:45:25.932631 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-29 00:45:25.932640 | orchestrator | } 2026-03-29 00:45:25.932694 | orchestrator | 2026-03-29 00:45:25.932704 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-29 00:45:25.932713 | orchestrator | Sunday 29 March 2026 00:45:21 +0000 (0:00:00.129) 0:01:00.732 ********** 2026-03-29 00:45:25.932722 | orchestrator | ok: [testbed-node-5] => { 2026-03-29 00:45:25.932731 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-29 00:45:25.932740 | orchestrator | } 2026-03-29 00:45:25.932759 | orchestrator | 2026-03-29 00:45:25.932772 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-29 00:45:25.932782 | orchestrator | Sunday 29 March 2026 00:45:21 +0000 (0:00:00.129) 0:01:00.862 ********** 2026-03-29 00:45:25.932791 | orchestrator | ok: [testbed-node-5] => { 2026-03-29 00:45:25.932801 | orchestrator |  
"_num_osds_wanted_per_db_wal_vg": {} 2026-03-29 00:45:25.932810 | orchestrator | } 2026-03-29 00:45:25.932819 | orchestrator | 2026-03-29 00:45:25.932828 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-29 00:45:25.932837 | orchestrator | Sunday 29 March 2026 00:45:21 +0000 (0:00:00.136) 0:01:00.998 ********** 2026-03-29 00:45:25.932867 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:45:25.932877 | orchestrator | 2026-03-29 00:45:25.932887 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-29 00:45:25.932896 | orchestrator | Sunday 29 March 2026 00:45:21 +0000 (0:00:00.496) 0:01:01.494 ********** 2026-03-29 00:45:25.932906 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:45:25.932915 | orchestrator | 2026-03-29 00:45:25.932925 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-29 00:45:25.932934 | orchestrator | Sunday 29 March 2026 00:45:22 +0000 (0:00:00.499) 0:01:01.994 ********** 2026-03-29 00:45:25.932943 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:45:25.932953 | orchestrator | 2026-03-29 00:45:25.932962 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-29 00:45:25.932972 | orchestrator | Sunday 29 March 2026 00:45:22 +0000 (0:00:00.487) 0:01:02.481 ********** 2026-03-29 00:45:25.932981 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:45:25.932990 | orchestrator | 2026-03-29 00:45:25.933000 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-29 00:45:25.933009 | orchestrator | Sunday 29 March 2026 00:45:23 +0000 (0:00:00.305) 0:01:02.787 ********** 2026-03-29 00:45:25.933019 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.933028 | orchestrator | 2026-03-29 00:45:25.933038 | orchestrator | TASK [Calculate VG sizes (with buffer)] 
**************************************** 2026-03-29 00:45:25.933047 | orchestrator | Sunday 29 March 2026 00:45:23 +0000 (0:00:00.111) 0:01:02.898 ********** 2026-03-29 00:45:25.933057 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.933066 | orchestrator | 2026-03-29 00:45:25.933075 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-29 00:45:25.933085 | orchestrator | Sunday 29 March 2026 00:45:23 +0000 (0:00:00.097) 0:01:02.996 ********** 2026-03-29 00:45:25.933095 | orchestrator | ok: [testbed-node-5] => { 2026-03-29 00:45:25.933104 | orchestrator |  "vgs_report": { 2026-03-29 00:45:25.933114 | orchestrator |  "vg": [] 2026-03-29 00:45:25.933148 | orchestrator |  } 2026-03-29 00:45:25.933163 | orchestrator | } 2026-03-29 00:45:25.933173 | orchestrator | 2026-03-29 00:45:25.933183 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-29 00:45:25.933193 | orchestrator | Sunday 29 March 2026 00:45:23 +0000 (0:00:00.136) 0:01:03.132 ********** 2026-03-29 00:45:25.933202 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.933212 | orchestrator | 2026-03-29 00:45:25.933221 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-29 00:45:25.933230 | orchestrator | Sunday 29 March 2026 00:45:23 +0000 (0:00:00.144) 0:01:03.277 ********** 2026-03-29 00:45:25.933239 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.933248 | orchestrator | 2026-03-29 00:45:25.933257 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-29 00:45:25.933266 | orchestrator | Sunday 29 March 2026 00:45:23 +0000 (0:00:00.150) 0:01:03.427 ********** 2026-03-29 00:45:25.933274 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.933282 | orchestrator | 2026-03-29 00:45:25.933288 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2026-03-29 00:45:25.933294 | orchestrator | Sunday 29 March 2026 00:45:24 +0000 (0:00:00.136) 0:01:03.563 ********** 2026-03-29 00:45:25.933299 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.933304 | orchestrator | 2026-03-29 00:45:25.933311 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-29 00:45:25.933320 | orchestrator | Sunday 29 March 2026 00:45:24 +0000 (0:00:00.155) 0:01:03.718 ********** 2026-03-29 00:45:25.933329 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.933337 | orchestrator | 2026-03-29 00:45:25.933346 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-29 00:45:25.933354 | orchestrator | Sunday 29 March 2026 00:45:24 +0000 (0:00:00.145) 0:01:03.864 ********** 2026-03-29 00:45:25.933363 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.933380 | orchestrator | 2026-03-29 00:45:25.933390 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-29 00:45:25.933396 | orchestrator | Sunday 29 March 2026 00:45:24 +0000 (0:00:00.133) 0:01:03.997 ********** 2026-03-29 00:45:25.933401 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.933406 | orchestrator | 2026-03-29 00:45:25.933412 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-29 00:45:25.933417 | orchestrator | Sunday 29 March 2026 00:45:24 +0000 (0:00:00.133) 0:01:04.131 ********** 2026-03-29 00:45:25.933423 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.933428 | orchestrator | 2026-03-29 00:45:25.933433 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-29 00:45:25.933439 | orchestrator | Sunday 29 March 2026 00:45:24 +0000 (0:00:00.140) 0:01:04.271 ********** 2026-03-29 00:45:25.933444 | orchestrator | skipping: 
[testbed-node-5] 2026-03-29 00:45:25.933449 | orchestrator | 2026-03-29 00:45:25.933455 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-29 00:45:25.933460 | orchestrator | Sunday 29 March 2026 00:45:24 +0000 (0:00:00.242) 0:01:04.514 ********** 2026-03-29 00:45:25.933466 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.933471 | orchestrator | 2026-03-29 00:45:25.933476 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-29 00:45:25.933482 | orchestrator | Sunday 29 March 2026 00:45:25 +0000 (0:00:00.123) 0:01:04.638 ********** 2026-03-29 00:45:25.933487 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.933492 | orchestrator | 2026-03-29 00:45:25.933498 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-29 00:45:25.933503 | orchestrator | Sunday 29 March 2026 00:45:25 +0000 (0:00:00.124) 0:01:04.763 ********** 2026-03-29 00:45:25.933509 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.933514 | orchestrator | 2026-03-29 00:45:25.933519 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-29 00:45:25.933525 | orchestrator | Sunday 29 March 2026 00:45:25 +0000 (0:00:00.120) 0:01:04.883 ********** 2026-03-29 00:45:25.933530 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.933535 | orchestrator | 2026-03-29 00:45:25.933541 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-29 00:45:25.933546 | orchestrator | Sunday 29 March 2026 00:45:25 +0000 (0:00:00.123) 0:01:05.006 ********** 2026-03-29 00:45:25.933551 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.933557 | orchestrator | 2026-03-29 00:45:25.933562 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-29 00:45:25.933567 | 
orchestrator | Sunday 29 March 2026 00:45:25 +0000 (0:00:00.126) 0:01:05.132 ********** 2026-03-29 00:45:25.933573 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})  2026-03-29 00:45:25.933578 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})  2026-03-29 00:45:25.933584 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.933589 | orchestrator | 2026-03-29 00:45:25.933595 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-29 00:45:25.933600 | orchestrator | Sunday 29 March 2026 00:45:25 +0000 (0:00:00.150) 0:01:05.283 ********** 2026-03-29 00:45:25.933613 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})  2026-03-29 00:45:25.933619 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})  2026-03-29 00:45:25.933625 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:25.933630 | orchestrator | 2026-03-29 00:45:25.933636 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-29 00:45:25.933687 | orchestrator | Sunday 29 March 2026 00:45:25 +0000 (0:00:00.139) 0:01:05.422 ********** 2026-03-29 00:45:25.933700 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})  2026-03-29 00:45:28.742635 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})  2026-03-29 
00:45:28.742730 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:28.742738 | orchestrator | 2026-03-29 00:45:28.742747 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-29 00:45:28.742755 | orchestrator | Sunday 29 March 2026 00:45:25 +0000 (0:00:00.125) 0:01:05.548 ********** 2026-03-29 00:45:28.742761 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})  2026-03-29 00:45:28.742782 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})  2026-03-29 00:45:28.742790 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:28.742797 | orchestrator | 2026-03-29 00:45:28.742803 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-29 00:45:28.742809 | orchestrator | Sunday 29 March 2026 00:45:26 +0000 (0:00:00.133) 0:01:05.681 ********** 2026-03-29 00:45:28.742817 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})  2026-03-29 00:45:28.742821 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})  2026-03-29 00:45:28.742825 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:28.742831 | orchestrator | 2026-03-29 00:45:28.742837 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-29 00:45:28.742843 | orchestrator | Sunday 29 March 2026 00:45:26 +0000 (0:00:00.147) 0:01:05.829 ********** 2026-03-29 00:45:28.742850 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 
'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})  2026-03-29 00:45:28.742856 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})  2026-03-29 00:45:28.742863 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:28.742869 | orchestrator | 2026-03-29 00:45:28.742875 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-29 00:45:28.742881 | orchestrator | Sunday 29 March 2026 00:45:26 +0000 (0:00:00.131) 0:01:05.961 ********** 2026-03-29 00:45:28.742887 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})  2026-03-29 00:45:28.742894 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})  2026-03-29 00:45:28.742901 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:28.742907 | orchestrator | 2026-03-29 00:45:28.742914 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-29 00:45:28.742920 | orchestrator | Sunday 29 March 2026 00:45:26 +0000 (0:00:00.250) 0:01:06.211 ********** 2026-03-29 00:45:28.742926 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})  2026-03-29 00:45:28.742933 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})  2026-03-29 00:45:28.742939 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:28.742962 | orchestrator | 2026-03-29 00:45:28.742969 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-29 
00:45:28.742975 | orchestrator | Sunday 29 March 2026 00:45:26 +0000 (0:00:00.134) 0:01:06.346 ********** 2026-03-29 00:45:28.742982 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:45:28.742989 | orchestrator | 2026-03-29 00:45:28.742996 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-29 00:45:28.743002 | orchestrator | Sunday 29 March 2026 00:45:27 +0000 (0:00:00.526) 0:01:06.873 ********** 2026-03-29 00:45:28.743009 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:45:28.743015 | orchestrator | 2026-03-29 00:45:28.743021 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-29 00:45:28.743027 | orchestrator | Sunday 29 March 2026 00:45:27 +0000 (0:00:00.515) 0:01:07.388 ********** 2026-03-29 00:45:28.743033 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:45:28.743039 | orchestrator | 2026-03-29 00:45:28.743045 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-29 00:45:28.743052 | orchestrator | Sunday 29 March 2026 00:45:27 +0000 (0:00:00.131) 0:01:07.519 ********** 2026-03-29 00:45:28.743057 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'vg_name': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'}) 2026-03-29 00:45:28.743065 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'vg_name': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'}) 2026-03-29 00:45:28.743072 | orchestrator | 2026-03-29 00:45:28.743078 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-29 00:45:28.743085 | orchestrator | Sunday 29 March 2026 00:45:28 +0000 (0:00:00.155) 0:01:07.675 ********** 2026-03-29 00:45:28.743106 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 
'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})  2026-03-29 00:45:28.743113 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})  2026-03-29 00:45:28.743119 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:28.743126 | orchestrator | 2026-03-29 00:45:28.743132 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-29 00:45:28.743139 | orchestrator | Sunday 29 March 2026 00:45:28 +0000 (0:00:00.140) 0:01:07.815 ********** 2026-03-29 00:45:28.743150 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})  2026-03-29 00:45:28.743157 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})  2026-03-29 00:45:28.743163 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:28.743169 | orchestrator | 2026-03-29 00:45:28.743176 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-29 00:45:28.743182 | orchestrator | Sunday 29 March 2026 00:45:28 +0000 (0:00:00.140) 0:01:07.955 ********** 2026-03-29 00:45:28.743189 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})  2026-03-29 00:45:28.743195 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})  2026-03-29 00:45:28.743202 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:28.743209 | orchestrator | 2026-03-29 00:45:28.743216 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-29 
00:45:28.743222 | orchestrator | Sunday 29 March 2026 00:45:28 +0000 (0:00:00.198) 0:01:08.154 ********** 2026-03-29 00:45:28.743229 | orchestrator | ok: [testbed-node-5] => { 2026-03-29 00:45:28.743236 | orchestrator |  "lvm_report": { 2026-03-29 00:45:28.743244 | orchestrator |  "lv": [ 2026-03-29 00:45:28.743257 | orchestrator |  { 2026-03-29 00:45:28.743264 | orchestrator |  "lv_name": "osd-block-185c2dd0-6b1c-571f-b734-244d928106eb", 2026-03-29 00:45:28.743272 | orchestrator |  "vg_name": "ceph-185c2dd0-6b1c-571f-b734-244d928106eb" 2026-03-29 00:45:28.743279 | orchestrator |  }, 2026-03-29 00:45:28.743286 | orchestrator |  { 2026-03-29 00:45:28.743293 | orchestrator |  "lv_name": "osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695", 2026-03-29 00:45:28.743300 | orchestrator |  "vg_name": "ceph-18721a71-2d87-5ab0-bec8-5e03a015e695" 2026-03-29 00:45:28.743306 | orchestrator |  } 2026-03-29 00:45:28.743313 | orchestrator |  ], 2026-03-29 00:45:28.743320 | orchestrator |  "pv": [ 2026-03-29 00:45:28.743327 | orchestrator |  { 2026-03-29 00:45:28.743334 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-29 00:45:28.743341 | orchestrator |  "vg_name": "ceph-185c2dd0-6b1c-571f-b734-244d928106eb" 2026-03-29 00:45:28.743347 | orchestrator |  }, 2026-03-29 00:45:28.743354 | orchestrator |  { 2026-03-29 00:45:28.743361 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-29 00:45:28.743368 | orchestrator |  "vg_name": "ceph-18721a71-2d87-5ab0-bec8-5e03a015e695" 2026-03-29 00:45:28.743374 | orchestrator |  } 2026-03-29 00:45:28.743381 | orchestrator |  ] 2026-03-29 00:45:28.743388 | orchestrator |  } 2026-03-29 00:45:28.743395 | orchestrator | } 2026-03-29 00:45:28.743402 | orchestrator | 2026-03-29 00:45:28.743409 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:45:28.743416 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-29 00:45:28.743423 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-29 00:45:28.743430 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-29 00:45:28.743437 | orchestrator | 2026-03-29 00:45:28.743444 | orchestrator | 2026-03-29 00:45:28.743451 | orchestrator | 2026-03-29 00:45:28.743458 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:45:28.743464 | orchestrator | Sunday 29 March 2026 00:45:28 +0000 (0:00:00.127) 0:01:08.282 ********** 2026-03-29 00:45:28.743471 | orchestrator | =============================================================================== 2026-03-29 00:45:28.743478 | orchestrator | Create block VGs -------------------------------------------------------- 5.73s 2026-03-29 00:45:28.743485 | orchestrator | Create block LVs -------------------------------------------------------- 4.05s 2026-03-29 00:45:28.743492 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.85s 2026-03-29 00:45:28.743499 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.57s 2026-03-29 00:45:28.743505 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.49s 2026-03-29 00:45:28.743512 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.49s 2026-03-29 00:45:28.743519 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.48s 2026-03-29 00:45:28.743526 | orchestrator | Add known partitions to the list of available block devices ------------- 1.28s 2026-03-29 00:45:28.743536 | orchestrator | Add known links to the list of available block devices ------------------ 1.12s 2026-03-29 00:45:28.986549 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s 2026-03-29 
00:45:28.986779 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s 2026-03-29 00:45:28.986806 | orchestrator | Print LVM report data --------------------------------------------------- 0.78s 2026-03-29 00:45:28.986826 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.71s 2026-03-29 00:45:28.986845 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2026-03-29 00:45:28.986898 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.69s 2026-03-29 00:45:28.986918 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2026-03-29 00:45:28.986956 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.65s 2026-03-29 00:45:28.986976 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2026-03-29 00:45:28.986995 | orchestrator | Print size needed for LVs on ceph_db_devices ---------------------------- 0.62s 2026-03-29 00:45:28.987015 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.61s 2026-03-29 00:45:40.264107 | orchestrator | 2026-03-29 00:45:40 | INFO  | Prepare task for execution of facts. 2026-03-29 00:45:40.332586 | orchestrator | 2026-03-29 00:45:40 | INFO  | Task 9f42a17a-ab80-4594-b358-01339ce10d88 (facts) was prepared for execution. 2026-03-29 00:45:40.332660 | orchestrator | 2026-03-29 00:45:40 | INFO  | It takes a moment until task 9f42a17a-ab80-4594-b358-01339ce10d88 (facts) has been started and output is visible here. 
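The Ceph configuration play above enumerates existing LVs and PVs via LVM's JSON report output and merges them (tasks "Get list of Ceph LVs/PVs with associated VGs", "Combine JSON from _lvs_cmd_output/_pvs_cmd_output", and "Create list of VG/LV names"). A minimal sketch of that combination step, assuming output shaped like `lvs --reportformat json -o lv_name,vg_name` and `pvs --reportformat json -o pv_name,vg_name` — the exact flags and helper names are assumptions, not taken from the playbook:

```python
import json

# Sample report output in the shape produced by
# `lvs --reportformat json -o lv_name,vg_name` (field names assumed).
lvs_json = '''{"report": [{"lv": [
  {"lv_name": "osd-block-185c2dd0", "vg_name": "ceph-185c2dd0"},
  {"lv_name": "osd-block-18721a71", "vg_name": "ceph-18721a71"}]}]}'''
pvs_json = '''{"report": [{"pv": [
  {"pv_name": "/dev/sdb", "vg_name": "ceph-185c2dd0"},
  {"pv_name": "/dev/sdc", "vg_name": "ceph-18721a71"}]}]}'''

def combine_reports(lvs_raw: str, pvs_raw: str) -> dict:
    """Merge the lv and pv sections into one report dict, mirroring
    the 'Combine JSON from _lvs_cmd_output/_pvs_cmd_output' task."""
    lv = json.loads(lvs_raw)["report"][0]["lv"]
    pv = json.loads(pvs_raw)["report"][0]["pv"]
    return {"lv": lv, "pv": pv}

def vg_lv_names(report: dict) -> list:
    """Build 'vg/lv' identifiers, as in 'Create list of VG/LV names'."""
    return ["{}/{}".format(e["vg_name"], e["lv_name"]) for e in report["lv"]]

report = combine_reports(lvs_json, pvs_json)
print(vg_lv_names(report))
```

The resulting list is what the later "Fail if ... LV defined in lvm_volumes is missing" checks compare against the expected `lvm_volumes` entries.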
2026-03-29 00:45:50.853148 | orchestrator | 2026-03-29 00:45:50.853217 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-29 00:45:50.853228 | orchestrator | 2026-03-29 00:45:50.853235 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-29 00:45:50.853242 | orchestrator | Sunday 29 March 2026 00:45:43 +0000 (0:00:00.286) 0:00:00.286 ********** 2026-03-29 00:45:50.853250 | orchestrator | ok: [testbed-manager] 2026-03-29 00:45:50.853258 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:45:50.853265 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:45:50.853272 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:45:50.853279 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:45:50.853286 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:45:50.853293 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:45:50.853300 | orchestrator | 2026-03-29 00:45:50.853307 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-29 00:45:50.853314 | orchestrator | Sunday 29 March 2026 00:45:44 +0000 (0:00:01.161) 0:00:01.448 ********** 2026-03-29 00:45:50.853321 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:45:50.853329 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:45:50.853336 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:45:50.853343 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:45:50.853350 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:45:50.853357 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:45:50.853364 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:50.853371 | orchestrator | 2026-03-29 00:45:50.853378 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-29 00:45:50.853386 | orchestrator | 2026-03-29 00:45:50.853393 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-29 00:45:50.853399 | orchestrator | Sunday 29 March 2026 00:45:45 +0000 (0:00:01.073) 0:00:02.522 ********** 2026-03-29 00:45:50.853407 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:45:50.853414 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:45:50.853421 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:45:50.853428 | orchestrator | ok: [testbed-manager] 2026-03-29 00:45:50.853434 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:45:50.853441 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:45:50.853448 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:45:50.853455 | orchestrator | 2026-03-29 00:45:50.853462 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-29 00:45:50.853469 | orchestrator | 2026-03-29 00:45:50.853476 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-29 00:45:50.853483 | orchestrator | Sunday 29 March 2026 00:45:50 +0000 (0:00:04.587) 0:00:07.110 ********** 2026-03-29 00:45:50.853490 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:45:50.853497 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:45:50.853521 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:45:50.853529 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:45:50.853536 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:45:50.853543 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:45:50.853549 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:45:50.853556 | orchestrator | 2026-03-29 00:45:50.853564 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:45:50.853571 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:45:50.853578 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-29 00:45:50.853585 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:45:50.853592 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:45:50.853599 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:45:50.853606 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:45:50.853614 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:45:50.853620 | orchestrator | 2026-03-29 00:45:50.853626 | orchestrator | 2026-03-29 00:45:50.853632 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:45:50.853640 | orchestrator | Sunday 29 March 2026 00:45:50 +0000 (0:00:00.469) 0:00:07.579 ********** 2026-03-29 00:45:50.853646 | orchestrator | =============================================================================== 2026-03-29 00:45:50.853653 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.59s 2026-03-29 00:45:50.853659 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.16s 2026-03-29 00:45:50.853675 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.07s 2026-03-29 00:45:50.853682 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.47s 2026-03-29 00:46:02.122608 | orchestrator | 2026-03-29 00:46:02 | INFO  | Prepare task for execution of frr. 2026-03-29 00:46:02.199385 | orchestrator | 2026-03-29 00:46:02 | INFO  | Task 21aa51e4-8dea-455b-a3f2-979c27192324 (frr) was prepared for execution. 
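The `osism.commons.facts` role above creates a custom facts directory and (conditionally) copies fact files into it. This uses Ansible's local-facts mechanism: static or executable `*.fact` files under `/etc/ansible/facts.d` surface under `ansible_local.<name>` after fact gathering. A sketch of that behaviour using a temporary directory in place of the real path (file name and fact contents are made up for illustration):

```python
import json
import os
import tempfile

# Stand-in for /etc/ansible/facts.d (standard Ansible local-facts path).
facts_dir = tempfile.mkdtemp()

# A static JSON fact file; contents here are purely illustrative.
fact = {"role": "ceph-osd", "deployed_by": "osism"}
path = os.path.join(facts_dir, "testbed.fact")
with open(path, "w") as fh:
    json.dump(fact, fh)

# What a later setup/gather_facts run would expose as
# ansible_local["testbed"] on the host.
with open(path) as fh:
    ansible_local = {"testbed": json.load(fh)}
print(ansible_local["testbed"]["role"])
```

In the log the "Copy fact files" task is skipped on every host, so only the directory is ensured here; the "Gathers facts about hosts" play that follows is what refreshes `ansible_local` cluster-wide.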
2026-03-29 00:46:02.199483 | orchestrator | 2026-03-29 00:46:02 | INFO  | It takes a moment until task 21aa51e4-8dea-455b-a3f2-979c27192324 (frr) has been started and output is visible here. 2026-03-29 00:46:24.341358 | orchestrator | 2026-03-29 00:46:24.341467 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-29 00:46:24.341481 | orchestrator | 2026-03-29 00:46:24.341489 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-29 00:46:24.341497 | orchestrator | Sunday 29 March 2026 00:46:05 +0000 (0:00:00.269) 0:00:00.269 ********** 2026-03-29 00:46:24.341504 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-29 00:46:24.341512 | orchestrator | 2026-03-29 00:46:24.341518 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-29 00:46:24.341526 | orchestrator | Sunday 29 March 2026 00:46:05 +0000 (0:00:00.197) 0:00:00.466 ********** 2026-03-29 00:46:24.341534 | orchestrator | changed: [testbed-manager] 2026-03-29 00:46:24.341543 | orchestrator | 2026-03-29 00:46:24.341550 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-29 00:46:24.341582 | orchestrator | Sunday 29 March 2026 00:46:06 +0000 (0:00:01.381) 0:00:01.847 ********** 2026-03-29 00:46:24.341590 | orchestrator | changed: [testbed-manager] 2026-03-29 00:46:24.341598 | orchestrator | 2026-03-29 00:46:24.341604 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-29 00:46:24.341611 | orchestrator | Sunday 29 March 2026 00:46:15 +0000 (0:00:08.316) 0:00:10.164 ********** 2026-03-29 00:46:24.341619 | orchestrator | ok: [testbed-manager] 2026-03-29 00:46:24.341627 | orchestrator | 2026-03-29 00:46:24.341634 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-29 00:46:24.341641 | orchestrator | Sunday 29 March 2026 00:46:16 +0000 (0:00:00.924) 0:00:11.089 ********** 2026-03-29 00:46:24.341648 | orchestrator | changed: [testbed-manager] 2026-03-29 00:46:24.341656 | orchestrator | 2026-03-29 00:46:24.341663 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-29 00:46:24.341670 | orchestrator | Sunday 29 March 2026 00:46:16 +0000 (0:00:00.863) 0:00:11.953 ********** 2026-03-29 00:46:24.341678 | orchestrator | ok: [testbed-manager] 2026-03-29 00:46:24.341685 | orchestrator | 2026-03-29 00:46:24.341692 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-03-29 00:46:24.341700 | orchestrator | Sunday 29 March 2026 00:46:17 +0000 (0:00:01.094) 0:00:13.047 ********** 2026-03-29 00:46:24.341707 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:46:24.341714 | orchestrator | 2026-03-29 00:46:24.341722 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-03-29 00:46:24.341729 | orchestrator | Sunday 29 March 2026 00:46:18 +0000 (0:00:00.144) 0:00:13.191 ********** 2026-03-29 00:46:24.341736 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:46:24.341744 | orchestrator | 2026-03-29 00:46:24.341809 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-03-29 00:46:24.341816 | orchestrator | Sunday 29 March 2026 00:46:18 +0000 (0:00:00.216) 0:00:13.408 ********** 2026-03-29 00:46:24.341822 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:46:24.341828 | orchestrator | 2026-03-29 00:46:24.341834 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-29 00:46:24.341842 | orchestrator | Sunday 29 March 2026 00:46:18 +0000 (0:00:00.154) 0:00:13.562 ********** 2026-03-29 
00:46:24.341849 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:46:24.341856 | orchestrator | 2026-03-29 00:46:24.341864 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-29 00:46:24.341871 | orchestrator | Sunday 29 March 2026 00:46:18 +0000 (0:00:00.108) 0:00:13.671 ********** 2026-03-29 00:46:24.341878 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:46:24.341884 | orchestrator | 2026-03-29 00:46:24.341891 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-29 00:46:24.341898 | orchestrator | Sunday 29 March 2026 00:46:18 +0000 (0:00:00.136) 0:00:13.807 ********** 2026-03-29 00:46:24.341906 | orchestrator | changed: [testbed-manager] 2026-03-29 00:46:24.341913 | orchestrator | 2026-03-29 00:46:24.341920 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-29 00:46:24.341927 | orchestrator | Sunday 29 March 2026 00:46:19 +0000 (0:00:00.858) 0:00:14.665 ********** 2026-03-29 00:46:24.341934 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-29 00:46:24.341962 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-29 00:46:24.341972 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-29 00:46:24.341988 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-29 00:46:24.341997 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-29 00:46:24.342005 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-29 00:46:24.342068 | orchestrator | 2026-03-29 00:46:24.342079 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-03-29 00:46:24.342087 | orchestrator | Sunday 29 March 2026 00:46:21 +0000 (0:00:02.046) 0:00:16.712 ********** 2026-03-29 00:46:24.342095 | orchestrator | ok: [testbed-manager] 2026-03-29 00:46:24.342103 | orchestrator | 2026-03-29 00:46:24.342110 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-29 00:46:24.342118 | orchestrator | Sunday 29 March 2026 00:46:22 +0000 (0:00:01.110) 0:00:17.822 ********** 2026-03-29 00:46:24.342127 | orchestrator | changed: [testbed-manager] 2026-03-29 00:46:24.342134 | orchestrator | 2026-03-29 00:46:24.342141 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:46:24.342149 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-29 00:46:24.342158 | orchestrator | 2026-03-29 00:46:24.342166 | orchestrator | 2026-03-29 00:46:24.342194 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:46:24.342203 | orchestrator | Sunday 29 March 2026 00:46:24 +0000 (0:00:01.330) 0:00:19.153 ********** 2026-03-29 00:46:24.342210 | orchestrator | =============================================================================== 2026-03-29 00:46:24.342217 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.32s 2026-03-29 00:46:24.342239 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.05s 2026-03-29 00:46:24.342247 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.38s 2026-03-29 00:46:24.342254 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.33s 2026-03-29 00:46:24.342261 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.11s 
2026-03-29 00:46:24.342269 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.09s 2026-03-29 00:46:24.342276 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.92s 2026-03-29 00:46:24.342284 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.86s 2026-03-29 00:46:24.342291 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.86s 2026-03-29 00:46:24.342299 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.22s 2026-03-29 00:46:24.342306 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s 2026-03-29 00:46:24.342313 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.15s 2026-03-29 00:46:24.342321 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.14s 2026-03-29 00:46:24.342328 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.14s 2026-03-29 00:46:24.342336 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.11s 2026-03-29 00:46:24.474295 | orchestrator | 2026-03-29 00:46:24.478100 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Mar 29 00:46:24 UTC 2026 2026-03-29 00:46:24.478161 | orchestrator | 2026-03-29 00:46:25.521939 | orchestrator | 2026-03-29 00:46:25 | INFO  | Collection nutshell is prepared for execution 2026-03-29 00:46:25.644886 | orchestrator | 2026-03-29 00:46:25 | INFO  | A [0] - dotfiles 2026-03-29 00:46:35.738383 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [0] - homer 2026-03-29 00:46:35.738473 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [0] - netdata 2026-03-29 00:46:35.738485 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [0] - openstackclient 2026-03-29 00:46:35.738494 | orchestrator | 2026-03-29 
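The frr role's "Set sysctl parameters" task above enables the kernel settings FRR needs on the manager (forwarding, multipath hashing, loose rp_filter, redirects off). A small sketch rendering exactly those name/value pairs from the log into `sysctl.conf`-style lines (the target file path would be an assumption, so only the rendering is shown):

```python
# The six sysctl parameters applied by osism.services.frr, as logged.
params = [
    {"name": "net.ipv4.ip_forward", "value": 1},
    {"name": "net.ipv4.conf.all.send_redirects", "value": 0},
    {"name": "net.ipv4.conf.all.accept_redirects", "value": 0},
    {"name": "net.ipv4.fib_multipath_hash_policy", "value": 1},
    {"name": "net.ipv4.conf.default.ignore_routes_with_linkdown", "value": 1},
    {"name": "net.ipv4.conf.all.rp_filter", "value": 2},
]

# Render as key = value lines, the format sysctl.d fragments use.
lines = ["{} = {}".format(p["name"], p["value"]) for p in params]
print("\n".join(lines))
```

`fib_multipath_hash_policy = 1` (L4 hashing) and `rp_filter = 2` (loose mode) are the settings that matter most for the ECMP/BGP routing FRR provides on the testbed manager.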
00:46:35 | INFO  | A [0] - phpmyadmin 2026-03-29 00:46:35.738502 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [0] - common 2026-03-29 00:46:35.741611 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [1] -- loadbalancer 2026-03-29 00:46:35.741672 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [2] --- opensearch 2026-03-29 00:46:35.741718 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [2] --- mariadb-ng 2026-03-29 00:46:35.741878 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [3] ---- horizon 2026-03-29 00:46:35.742488 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [3] ---- keystone 2026-03-29 00:46:35.742527 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [4] ----- neutron 2026-03-29 00:46:35.742533 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [5] ------ wait-for-nova 2026-03-29 00:46:35.742853 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [6] ------- octavia 2026-03-29 00:46:35.744337 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [4] ----- barbican 2026-03-29 00:46:35.744418 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [4] ----- designate 2026-03-29 00:46:35.744746 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [4] ----- ironic 2026-03-29 00:46:35.744759 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [4] ----- placement 2026-03-29 00:46:35.744951 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [4] ----- magnum 2026-03-29 00:46:35.746996 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [1] -- openvswitch 2026-03-29 00:46:35.747017 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [2] --- ovn 2026-03-29 00:46:35.747577 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [1] -- memcached 2026-03-29 00:46:35.747786 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [1] -- redis 2026-03-29 00:46:35.747803 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [1] -- rabbitmq-ng 2026-03-29 00:46:35.748105 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [0] - kubernetes 2026-03-29 00:46:35.751094 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [1] -- 
kubeconfig 2026-03-29 00:46:35.751130 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [1] -- copy-kubeconfig 2026-03-29 00:46:35.751361 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [0] - ceph 2026-03-29 00:46:35.753388 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [1] -- ceph-pools 2026-03-29 00:46:35.753537 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [2] --- copy-ceph-keys 2026-03-29 00:46:35.753660 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [3] ---- cephclient 2026-03-29 00:46:35.753716 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-29 00:46:35.754046 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [4] ----- wait-for-keystone 2026-03-29 00:46:35.754062 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-29 00:46:35.754175 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [5] ------ glance 2026-03-29 00:46:35.754290 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [5] ------ cinder 2026-03-29 00:46:35.754421 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [5] ------ nova 2026-03-29 00:46:35.754754 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [4] ----- prometheus 2026-03-29 00:46:35.754831 | orchestrator | 2026-03-29 00:46:35 | INFO  | A [5] ------ grafana 2026-03-29 00:46:35.925620 | orchestrator | 2026-03-29 00:46:35 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-29 00:46:35.925704 | orchestrator | 2026-03-29 00:46:35 | INFO  | Tasks are running in the background 2026-03-29 00:46:37.297369 | orchestrator | 2026-03-29 00:46:37 | INFO  | No task IDs specified, wait for all currently running tasks 2026-03-29 00:46:39.475943 | orchestrator | 2026-03-29 00:46:39 | INFO  | Task feb9c89a-0c45-49b3-8df3-c9abfd37c122 is in state STARTED 2026-03-29 00:46:39.476261 | orchestrator | 2026-03-29 00:46:39 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:46:39.477084 | orchestrator | 2026-03-29 00:46:39 | INFO 
 | Task cc451751-1fed-4ef5-b623-0676d09a4162 is in state STARTED 2026-03-29 00:46:39.477720 | orchestrator | 2026-03-29 00:46:39 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED 2026-03-29 00:46:39.481195 | orchestrator | 2026-03-29 00:46:39 | INFO  | Task 3cdffb8c-8a3a-4f1e-869f-60f7ab62bd47 is in state STARTED 2026-03-29 00:46:39.481713 | orchestrator | 2026-03-29 00:46:39 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED 2026-03-29 00:46:39.482365 | orchestrator | 2026-03-29 00:46:39 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:46:39.482542 | orchestrator | 2026-03-29 00:46:39 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:46:42.549934 | orchestrator | 2026-03-29 00:46:42 | INFO  | Task feb9c89a-0c45-49b3-8df3-c9abfd37c122 is in state STARTED 2026-03-29 00:46:42.550798 | orchestrator | 2026-03-29 00:46:42 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:46:42.550859 | orchestrator | 2026-03-29 00:46:42 | INFO  | Task cc451751-1fed-4ef5-b623-0676d09a4162 is in state STARTED 2026-03-29 00:46:42.550868 | orchestrator | 2026-03-29 00:46:42 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED 2026-03-29 00:46:42.550875 | orchestrator | 2026-03-29 00:46:42 | INFO  | Task 3cdffb8c-8a3a-4f1e-869f-60f7ab62bd47 is in state STARTED 2026-03-29 00:46:42.550882 | orchestrator | 2026-03-29 00:46:42 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED 2026-03-29 00:46:42.550888 | orchestrator | 2026-03-29 00:46:42 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:46:42.550896 | orchestrator | 2026-03-29 00:46:42 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:46:45.576340 | orchestrator | 2026-03-29 00:46:45 | INFO  | Task feb9c89a-0c45-49b3-8df3-c9abfd37c122 is in state STARTED 2026-03-29 00:46:45.577837 | orchestrator | 2026-03-29 00:46:45 | INFO  | Task 
d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:46:45.578357 | orchestrator | 2026-03-29 00:46:45 | INFO  | Task cc451751-1fed-4ef5-b623-0676d09a4162 is in state STARTED 2026-03-29 00:46:45.579559 | orchestrator | 2026-03-29 00:46:45 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED 2026-03-29 00:46:45.580314 | orchestrator | 2026-03-29 00:46:45 | INFO  | Task 3cdffb8c-8a3a-4f1e-869f-60f7ab62bd47 is in state STARTED 2026-03-29 00:46:45.584054 | orchestrator | 2026-03-29 00:46:45 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED 2026-03-29 00:46:45.584344 | orchestrator | 2026-03-29 00:46:45 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:46:45.584741 | orchestrator | 2026-03-29 00:46:45 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:46:48.762570 | orchestrator | 2026-03-29 00:46:48 | INFO  | Task feb9c89a-0c45-49b3-8df3-c9abfd37c122 is in state STARTED 2026-03-29 00:46:48.762643 | orchestrator | 2026-03-29 00:46:48 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:46:48.762649 | orchestrator | 2026-03-29 00:46:48 | INFO  | Task cc451751-1fed-4ef5-b623-0676d09a4162 is in state STARTED 2026-03-29 00:46:48.762653 | orchestrator | 2026-03-29 00:46:48 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED 2026-03-29 00:46:48.762657 | orchestrator | 2026-03-29 00:46:48 | INFO  | Task 3cdffb8c-8a3a-4f1e-869f-60f7ab62bd47 is in state STARTED 2026-03-29 00:46:48.762661 | orchestrator | 2026-03-29 00:46:48 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED 2026-03-29 00:46:48.762682 | orchestrator | 2026-03-29 00:46:48 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:46:48.762687 | orchestrator | 2026-03-29 00:46:48 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:46:51.819004 | orchestrator | 2026-03-29 00:46:51 | INFO  | Task 
feb9c89a-0c45-49b3-8df3-c9abfd37c122 is in state STARTED 2026-03-29 00:46:51.825424 | orchestrator | 2026-03-29 00:46:51 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:46:51.826557 | orchestrator | 2026-03-29 00:46:51 | INFO  | Task cc451751-1fed-4ef5-b623-0676d09a4162 is in state STARTED 2026-03-29 00:46:51.828880 | orchestrator | 2026-03-29 00:46:51 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED 2026-03-29 00:46:51.830508 | orchestrator | 2026-03-29 00:46:51 | INFO  | Task 3cdffb8c-8a3a-4f1e-869f-60f7ab62bd47 is in state STARTED 2026-03-29 00:46:51.830845 | orchestrator | 2026-03-29 00:46:51 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED 2026-03-29 00:46:51.833596 | orchestrator | 2026-03-29 00:46:51 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:46:51.834317 | orchestrator | 2026-03-29 00:46:51 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:46:54.936854 | orchestrator | 2026-03-29 00:46:54 | INFO  | Task feb9c89a-0c45-49b3-8df3-c9abfd37c122 is in state STARTED 2026-03-29 00:46:54.937941 | orchestrator | 2026-03-29 00:46:54 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:46:54.939938 | orchestrator | 2026-03-29 00:46:54 | INFO  | Task cc451751-1fed-4ef5-b623-0676d09a4162 is in state STARTED 2026-03-29 00:46:54.940911 | orchestrator | 2026-03-29 00:46:54 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED 2026-03-29 00:46:54.943474 | orchestrator | 2026-03-29 00:46:54 | INFO  | Task 3cdffb8c-8a3a-4f1e-869f-60f7ab62bd47 is in state STARTED 2026-03-29 00:46:54.944390 | orchestrator | 2026-03-29 00:46:54 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED 2026-03-29 00:46:54.946086 | orchestrator | 2026-03-29 00:46:54 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:46:54.946854 | orchestrator | 2026-03-29 
00:46:54 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:46:58.100269 | orchestrator | 2026-03-29 00:46:58 | INFO  | Task feb9c89a-0c45-49b3-8df3-c9abfd37c122 is in state STARTED 2026-03-29 00:46:58.101297 | orchestrator | 2026-03-29 00:46:58 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:46:58.102592 | orchestrator | 2026-03-29 00:46:58 | INFO  | Task cc451751-1fed-4ef5-b623-0676d09a4162 is in state STARTED 2026-03-29 00:46:58.103744 | orchestrator | 2026-03-29 00:46:58 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED 2026-03-29 00:46:58.105249 | orchestrator | 2026-03-29 00:46:58 | INFO  | Task 3cdffb8c-8a3a-4f1e-869f-60f7ab62bd47 is in state STARTED 2026-03-29 00:46:58.106048 | orchestrator | 2026-03-29 00:46:58 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED 2026-03-29 00:46:58.107301 | orchestrator | 2026-03-29 00:46:58 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:46:58.107329 | orchestrator | 2026-03-29 00:46:58 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:47:01.322197 | orchestrator | 2026-03-29 00:47:01 | INFO  | Task feb9c89a-0c45-49b3-8df3-c9abfd37c122 is in state STARTED 2026-03-29 00:47:01.322739 | orchestrator | 2026-03-29 00:47:01 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:47:01.323881 | orchestrator | 2026-03-29 00:47:01 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED 2026-03-29 00:47:01.325773 | orchestrator | 2026-03-29 00:47:01 | INFO  | Task cc451751-1fed-4ef5-b623-0676d09a4162 is in state STARTED 2026-03-29 00:47:01.326358 | orchestrator | 2026-03-29 00:47:01 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED 2026-03-29 00:47:01.326840 | orchestrator | 2026-03-29 00:47:01 | INFO  | Task 3cdffb8c-8a3a-4f1e-869f-60f7ab62bd47 is in state SUCCESS 2026-03-29 00:47:01.327037 | orchestrator | 2026-03-29 
00:47:01.327054 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-03-29 00:47:01.327059 | orchestrator | 2026-03-29 00:47:01.327063 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2026-03-29 00:47:01.327067 | orchestrator | Sunday 29 March 2026 00:46:45 +0000 (0:00:00.755) 0:00:00.755 ********** 2026-03-29 00:47:01.327071 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:47:01.327076 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:47:01.327080 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:47:01.327083 | orchestrator | changed: [testbed-manager] 2026-03-29 00:47:01.327087 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:47:01.327091 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:47:01.327095 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:47:01.327099 | orchestrator | 2026-03-29 00:47:01.327102 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-03-29 00:47:01.327106 | orchestrator | Sunday 29 March 2026 00:46:50 +0000 (0:00:04.682) 0:00:05.438 ********** 2026-03-29 00:47:01.327110 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-29 00:47:01.327115 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-29 00:47:01.327118 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-29 00:47:01.327122 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-29 00:47:01.327126 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-29 00:47:01.327130 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-29 00:47:01.327133 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-29 00:47:01.327137 | orchestrator | 2026-03-29 00:47:01.327141 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-03-29 00:47:01.327145 | orchestrator | Sunday 29 March 2026 00:46:51 +0000 (0:00:01.742) 0:00:07.181 ********** 2026-03-29 00:47:01.327151 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-29 00:46:51.721237', 'end': '2026-03-29 00:46:51.740235', 'delta': '0:00:00.018998', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-29 00:47:01.327156 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-29 00:46:51.562437', 'end': '2026-03-29 00:46:51.567743', 'delta': '0:00:00.005306', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-29 00:47:01.327174 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-29 00:46:51.607440', 'end': '2026-03-29 00:46:51.613339', 'delta': '0:00:00.005899', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-29 00:47:01.327184 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-29 00:46:51.710638', 'end': '2026-03-29 00:46:51.714675', 'delta': '0:00:00.004037', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-29 00:47:01.327189 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-29 00:46:51.592416', 'end': '2026-03-29 00:46:51.596529', 'delta': '0:00:00.004113', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-29 00:47:01.327193 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-29 00:46:51.654605', 'end': '2026-03-29 00:46:51.658954', 'delta': '0:00:00.004349', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-29 00:47:01.327197 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-29 00:46:51.677582', 'end': '2026-03-29 00:46:51.683086', 'delta': '0:00:00.005504', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-29 00:47:01.327204 | orchestrator | 2026-03-29 00:47:01.327208 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-03-29 00:47:01.327212 | orchestrator | Sunday 29 March 2026 00:46:54 +0000 (0:00:02.499) 0:00:09.680 ********** 2026-03-29 00:47:01.327216 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-29 00:47:01.327220 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-29 00:47:01.327224 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-29 00:47:01.327235 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-29 00:47:01.327239 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-29 00:47:01.327243 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-29 00:47:01.327251 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-29 00:47:01.327255 | orchestrator | 2026-03-29 00:47:01.327259 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-03-29 00:47:01.327263 | orchestrator | Sunday 29 March 2026 00:46:55 +0000 (0:00:01.472) 0:00:11.152 ********** 2026-03-29 00:47:01.327267 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-03-29 00:47:01.327270 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-03-29 00:47:01.327274 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-03-29 00:47:01.327278 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-03-29 00:47:01.327282 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-03-29 00:47:01.327286 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-03-29 00:47:01.327290 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-03-29 00:47:01.327293 | orchestrator | 2026-03-29 00:47:01.327297 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:47:01.327304 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:47:01.327309 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:47:01.327313 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:47:01.327317 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:47:01.327459 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:47:01.327473 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:47:01.327480 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:47:01.327487 | orchestrator | 2026-03-29 00:47:01.327493 | orchestrator | 2026-03-29 00:47:01.327499 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-03-29 00:47:01.327506 | orchestrator | Sunday 29 March 2026 00:46:59 +0000 (0:00:03.417) 0:00:14.570 ********** 2026-03-29 00:47:01.327512 | orchestrator | =============================================================================== 2026-03-29 00:47:01.327518 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.68s 2026-03-29 00:47:01.327524 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.42s 2026-03-29 00:47:01.327530 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.50s 2026-03-29 00:47:01.327541 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.74s 2026-03-29 00:47:01.327545 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.47s 2026-03-29 00:47:01.327797 | orchestrator | 2026-03-29 00:47:01 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED 2026-03-29 00:47:01.328672 | orchestrator | 2026-03-29 00:47:01 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:47:01.329072 | orchestrator | 2026-03-29 00:47:01 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:47:04.601774 | orchestrator | 2026-03-29 00:47:04 | INFO  | Task feb9c89a-0c45-49b3-8df3-c9abfd37c122 is in state STARTED 2026-03-29 00:47:04.602793 | orchestrator | 2026-03-29 00:47:04 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:47:04.602894 | orchestrator | 2026-03-29 00:47:04 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED 2026-03-29 00:47:04.602905 | orchestrator | 2026-03-29 00:47:04 | INFO  | Task cc451751-1fed-4ef5-b623-0676d09a4162 is in state STARTED 2026-03-29 00:47:04.606187 | orchestrator | 2026-03-29 00:47:04 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is 
in state STARTED 2026-03-29 00:47:04.606525 | orchestrator | 2026-03-29 00:47:04 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED 2026-03-29 00:47:04.607299 | orchestrator | 2026-03-29 00:47:04 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:47:04.607344 | orchestrator | 2026-03-29 00:47:04 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:47:07.716473 | orchestrator | 2026-03-29 00:47:07 | INFO  | Task feb9c89a-0c45-49b3-8df3-c9abfd37c122 is in state STARTED 2026-03-29 00:47:07.717098 | orchestrator | 2026-03-29 00:47:07 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:47:07.718885 | orchestrator | 2026-03-29 00:47:07 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED 2026-03-29 00:47:07.720861 | orchestrator | 2026-03-29 00:47:07 | INFO  | Task cc451751-1fed-4ef5-b623-0676d09a4162 is in state STARTED 2026-03-29 00:47:07.722312 | orchestrator | 2026-03-29 00:47:07 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED 2026-03-29 00:47:07.723379 | orchestrator | 2026-03-29 00:47:07 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED 2026-03-29 00:47:07.724908 | orchestrator | 2026-03-29 00:47:07 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:47:07.725335 | orchestrator | 2026-03-29 00:47:07 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:47:10.764528 | orchestrator | 2026-03-29 00:47:10 | INFO  | Task feb9c89a-0c45-49b3-8df3-c9abfd37c122 is in state STARTED 2026-03-29 00:47:10.767579 | orchestrator | 2026-03-29 00:47:10 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:47:10.771582 | orchestrator | 2026-03-29 00:47:10 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED 2026-03-29 00:47:10.774368 | orchestrator | 2026-03-29 00:47:10 | INFO  | Task cc451751-1fed-4ef5-b623-0676d09a4162 is in 
state STARTED 2026-03-29 00:47:10.778063 | orchestrator | 2026-03-29 00:47:10 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED 2026-03-29 00:47:10.781980 | orchestrator | 2026-03-29 00:47:10 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED 2026-03-29 00:47:10.791624 | orchestrator | 2026-03-29 00:47:10 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:47:10.791689 | orchestrator | 2026-03-29 00:47:10 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:47:13.831683 | orchestrator | 2026-03-29 00:47:13 | INFO  | Task feb9c89a-0c45-49b3-8df3-c9abfd37c122 is in state STARTED 2026-03-29 00:47:13.834540 | orchestrator | 2026-03-29 00:47:13 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:47:13.837479 | orchestrator | 2026-03-29 00:47:13 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED 2026-03-29 00:47:13.850713 | orchestrator | 2026-03-29 00:47:13 | INFO  | Task cc451751-1fed-4ef5-b623-0676d09a4162 is in state STARTED 2026-03-29 00:47:13.854795 | orchestrator | 2026-03-29 00:47:13 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED 2026-03-29 00:47:13.857235 | orchestrator | 2026-03-29 00:47:13 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED 2026-03-29 00:47:13.863039 | orchestrator | 2026-03-29 00:47:13 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:47:13.863311 | orchestrator | 2026-03-29 00:47:13 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:47:16.916924 | orchestrator | 2026-03-29 00:47:16 | INFO  | Task feb9c89a-0c45-49b3-8df3-c9abfd37c122 is in state STARTED 2026-03-29 00:47:16.916983 | orchestrator | 2026-03-29 00:47:16 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:47:16.916990 | orchestrator | 2026-03-29 00:47:16 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state 
STARTED 2026-03-29 00:47:16.916995 | orchestrator | 2026-03-29 00:47:16 | INFO  | Task cc451751-1fed-4ef5-b623-0676d09a4162 is in state STARTED 2026-03-29 00:47:16.917001 | orchestrator | 2026-03-29 00:47:16 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED 2026-03-29 00:47:16.919713 | orchestrator | 2026-03-29 00:47:16 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED 2026-03-29 00:47:16.921549 | orchestrator | 2026-03-29 00:47:16 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:47:16.921611 | orchestrator | 2026-03-29 00:47:16 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:47:20.128619 | orchestrator | 2026-03-29 00:47:20 | INFO  | Task feb9c89a-0c45-49b3-8df3-c9abfd37c122 is in state STARTED 2026-03-29 00:47:20.128665 | orchestrator | 2026-03-29 00:47:20 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:47:20.128669 | orchestrator | 2026-03-29 00:47:20 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED 2026-03-29 00:47:20.128673 | orchestrator | 2026-03-29 00:47:20 | INFO  | Task cc451751-1fed-4ef5-b623-0676d09a4162 is in state STARTED 2026-03-29 00:47:20.128676 | orchestrator | 2026-03-29 00:47:20 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED 2026-03-29 00:47:20.128679 | orchestrator | 2026-03-29 00:47:20 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED 2026-03-29 00:47:20.128683 | orchestrator | 2026-03-29 00:47:20 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:47:20.128686 | orchestrator | 2026-03-29 00:47:20 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:47:23.114952 | orchestrator | 2026-03-29 00:47:23 | INFO  | Task feb9c89a-0c45-49b3-8df3-c9abfd37c122 is in state STARTED 2026-03-29 00:47:23.115039 | orchestrator | 2026-03-29 00:47:23 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 
2026-03-29 00:47:23.115056 | orchestrator | 2026-03-29 00:47:23 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED
2026-03-29 00:47:23.115071 | orchestrator | 2026-03-29 00:47:23 | INFO  | Task cc451751-1fed-4ef5-b623-0676d09a4162 is in state SUCCESS
2026-03-29 00:47:23.115103 | orchestrator | 2026-03-29 00:47:23 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:47:23.115111 | orchestrator | 2026-03-29 00:47:23 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED
2026-03-29 00:47:23.115122 | orchestrator | 2026-03-29 00:47:23 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:47:23.115141 | orchestrator | 2026-03-29 00:47:23 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:47:26.170607 | orchestrator | 2026-03-29 00:47:26 | INFO  | Task feb9c89a-0c45-49b3-8df3-c9abfd37c122 is in state STARTED
2026-03-29 00:47:26.176762 | orchestrator | 2026-03-29 00:47:26 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:47:26.179567 | orchestrator | 2026-03-29 00:47:26 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED
2026-03-29 00:47:26.185631 | orchestrator | 2026-03-29 00:47:26 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:47:26.186977 | orchestrator | 2026-03-29 00:47:26 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED
2026-03-29 00:47:26.190564 | orchestrator | 2026-03-29 00:47:26 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:47:26.190604 | orchestrator | 2026-03-29 00:47:26 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:47:29.226047 | orchestrator | 2026-03-29 00:47:29 | INFO  | Task feb9c89a-0c45-49b3-8df3-c9abfd37c122 is in state STARTED
2026-03-29 00:47:29.226184 | orchestrator | 2026-03-29 00:47:29 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:47:29.228215 | orchestrator | 2026-03-29 00:47:29 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED
2026-03-29 00:47:29.228516 | orchestrator | 2026-03-29 00:47:29 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:47:29.229319 | orchestrator | 2026-03-29 00:47:29 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED
2026-03-29 00:47:29.230398 | orchestrator | 2026-03-29 00:47:29 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:47:29.230444 | orchestrator | 2026-03-29 00:47:29 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:47:32.398960 | orchestrator | 2026-03-29 00:47:32 | INFO  | Task feb9c89a-0c45-49b3-8df3-c9abfd37c122 is in state STARTED
2026-03-29 00:47:32.399053 | orchestrator | 2026-03-29 00:47:32 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:47:32.399064 | orchestrator | 2026-03-29 00:47:32 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED
2026-03-29 00:47:32.399072 | orchestrator | 2026-03-29 00:47:32 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:47:32.399079 | orchestrator | 2026-03-29 00:47:32 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED
2026-03-29 00:47:32.399105 | orchestrator | 2026-03-29 00:47:32 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:47:32.399114 | orchestrator | 2026-03-29 00:47:32 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:47:35.304069 | orchestrator | 2026-03-29 00:47:35 | INFO  | Task feb9c89a-0c45-49b3-8df3-c9abfd37c122 is in state SUCCESS
2026-03-29 00:47:35.305818 | orchestrator | 2026-03-29 00:47:35 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:47:35.307334 | orchestrator | 2026-03-29 00:47:35 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED
2026-03-29 00:47:35.309251 | orchestrator | 2026-03-29 00:47:35 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:47:35.310047 | orchestrator | 2026-03-29 00:47:35 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED
2026-03-29 00:47:35.311731 | orchestrator | 2026-03-29 00:47:35 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:47:35.311763 | orchestrator | 2026-03-29 00:47:35 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:47:38.350956 | orchestrator | 2026-03-29 00:47:38 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:47:38.353176 | orchestrator | 2026-03-29 00:47:38 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED
2026-03-29 00:47:38.354283 | orchestrator | 2026-03-29 00:47:38 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:47:38.355180 | orchestrator | 2026-03-29 00:47:38 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED
2026-03-29 00:47:38.356803 | orchestrator | 2026-03-29 00:47:38 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:47:38.356837 | orchestrator | 2026-03-29 00:47:38 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:47:41.396637 | orchestrator | 2026-03-29 00:47:41 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:47:41.398331 | orchestrator | 2026-03-29 00:47:41 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED
2026-03-29 00:47:41.400388 | orchestrator | 2026-03-29 00:47:41 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:47:41.401136 | orchestrator | 2026-03-29 00:47:41 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED
2026-03-29 00:47:41.402803 | orchestrator | 2026-03-29 00:47:41 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:47:41.402985 | orchestrator | 2026-03-29 00:47:41 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:47:44.434320 | orchestrator | 2026-03-29 00:47:44 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:47:44.435307 | orchestrator | 2026-03-29 00:47:44 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED
2026-03-29 00:47:44.439235 | orchestrator | 2026-03-29 00:47:44 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:47:44.444562 | orchestrator | 2026-03-29 00:47:44 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED
2026-03-29 00:47:44.447670 | orchestrator | 2026-03-29 00:47:44 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:47:44.447715 | orchestrator | 2026-03-29 00:47:44 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:47:47.494587 | orchestrator | 2026-03-29 00:47:47 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:47:47.499278 | orchestrator | 2026-03-29 00:47:47 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED
2026-03-29 00:47:47.499346 | orchestrator | 2026-03-29 00:47:47 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:47:47.499360 | orchestrator | 2026-03-29 00:47:47 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED
2026-03-29 00:47:47.502235 | orchestrator | 2026-03-29 00:47:47 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:47:47.502284 | orchestrator | 2026-03-29 00:47:47 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:47:50.578229 | orchestrator | 2026-03-29 00:47:50 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:47:50.579380 | orchestrator | 2026-03-29 00:47:50 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED
2026-03-29 00:47:50.580171 | orchestrator | 2026-03-29 00:47:50 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:47:50.581086 | orchestrator | 2026-03-29 00:47:50 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED
2026-03-29 00:47:50.582001 | orchestrator | 2026-03-29 00:47:50 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:47:50.582055 | orchestrator | 2026-03-29 00:47:50 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:47:53.660438 | orchestrator | 2026-03-29 00:47:53 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:47:53.666205 | orchestrator | 2026-03-29 00:47:53 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED
2026-03-29 00:47:53.725637 | orchestrator | 2026-03-29 00:47:53 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:47:53.726547 | orchestrator | 2026-03-29 00:47:53 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED
2026-03-29 00:47:53.726577 | orchestrator | 2026-03-29 00:47:53 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:47:53.726583 | orchestrator | 2026-03-29 00:47:53 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:47:56.714585 | orchestrator | 2026-03-29 00:47:56 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:47:56.718004 | orchestrator | 2026-03-29 00:47:56 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED
2026-03-29 00:47:56.721435 | orchestrator | 2026-03-29 00:47:56 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:47:56.724439 | orchestrator | 2026-03-29 00:47:56 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED
2026-03-29 00:47:56.725339 | orchestrator | 2026-03-29 00:47:56 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:47:56.726583 | orchestrator | 2026-03-29 00:47:56 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:47:59.758994 | orchestrator | 2026-03-29 00:47:59 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:47:59.759043 | orchestrator | 2026-03-29 00:47:59 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED
2026-03-29 00:47:59.761337 | orchestrator | 2026-03-29 00:47:59 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:47:59.762431 | orchestrator | 2026-03-29 00:47:59 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED
2026-03-29 00:47:59.764929 | orchestrator | 2026-03-29 00:47:59 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:47:59.766229 | orchestrator | 2026-03-29 00:47:59 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:48:02.850267 | orchestrator | 2026-03-29 00:48:02 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:48:02.853772 | orchestrator | 2026-03-29 00:48:02 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED
2026-03-29 00:48:02.853836 | orchestrator | 2026-03-29 00:48:02 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:48:02.854127 | orchestrator | 2026-03-29 00:48:02 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED
2026-03-29 00:48:02.855694 | orchestrator | 2026-03-29 00:48:02 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:48:02.856257 | orchestrator | 2026-03-29 00:48:02 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:48:05.901264 | orchestrator | 2026-03-29 00:48:05 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:48:05.901337 | orchestrator | 2026-03-29 00:48:05 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED
2026-03-29 00:48:05.902487 | orchestrator | 2026-03-29
00:48:05 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:48:05.903103 | orchestrator | 2026-03-29 00:48:05 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED
2026-03-29 00:48:05.903884 | orchestrator | 2026-03-29 00:48:05 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:48:05.903950 | orchestrator | 2026-03-29 00:48:05 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:48:08.936933 | orchestrator | 2026-03-29 00:48:08 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:48:08.937125 | orchestrator | 2026-03-29 00:48:08 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state STARTED
2026-03-29 00:48:08.938544 | orchestrator | 2026-03-29 00:48:08 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:48:08.938809 | orchestrator | 2026-03-29 00:48:08 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED
2026-03-29 00:48:08.943753 | orchestrator | 2026-03-29 00:48:08 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:48:08.943813 | orchestrator | 2026-03-29 00:48:08 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:48:11.977197 | orchestrator | 2026-03-29 00:48:11 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:48:11.977689 | orchestrator | 2026-03-29 00:48:11 | INFO  | Task d10becc9-8919-4be5-af0b-496400f02fc4 is in state SUCCESS
2026-03-29 00:48:11.977709 | orchestrator |
2026-03-29 00:48:11.977716 | orchestrator |
2026-03-29 00:48:11.977722 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-29 00:48:11.977729 | orchestrator |
2026-03-29 00:48:11.977735 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-29 00:48:11.977742 | orchestrator | Sunday 29 March 2026 00:46:44 +0000 (0:00:00.680) 0:00:00.680 **********
2026-03-29 00:48:11.977750 | orchestrator | ok: [testbed-manager] => {
2026-03-29 00:48:11.977757 | orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-29 00:48:11.977763 | orchestrator | }
2026-03-29 00:48:11.977767 | orchestrator |
2026-03-29 00:48:11.977771 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-29 00:48:11.977775 | orchestrator | Sunday 29 March 2026 00:46:44 +0000 (0:00:00.307) 0:00:00.987 **********
2026-03-29 00:48:11.977780 | orchestrator | ok: [testbed-manager]
2026-03-29 00:48:11.977784 | orchestrator |
2026-03-29 00:48:11.977788 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-29 00:48:11.977793 | orchestrator | Sunday 29 March 2026 00:46:47 +0000 (0:00:02.729) 0:00:03.717 **********
2026-03-29 00:48:11.977797 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-29 00:48:11.977802 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-29 00:48:11.977805 | orchestrator |
2026-03-29 00:48:11.977809 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-29 00:48:11.977813 | orchestrator | Sunday 29 March 2026 00:46:48 +0000 (0:00:01.500) 0:00:05.217 **********
2026-03-29 00:48:11.977817 | orchestrator | changed: [testbed-manager]
2026-03-29 00:48:11.977835 | orchestrator |
2026-03-29 00:48:11.977844 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-29 00:48:11.977850 | orchestrator | Sunday 29 March 2026 00:46:51 +0000 (0:00:02.586) 0:00:07.804 **********
2026-03-29 00:48:11.977855 | orchestrator | changed: [testbed-manager]
2026-03-29 00:48:11.977861 | orchestrator |
2026-03-29 00:48:11.977867 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-29 00:48:11.977873 | orchestrator | Sunday 29 March 2026 00:46:53 +0000 (0:00:01.543) 0:00:09.348 **********
2026-03-29 00:48:11.977879 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-29 00:48:11.977885 | orchestrator | ok: [testbed-manager]
2026-03-29 00:48:11.977890 | orchestrator |
2026-03-29 00:48:11.977897 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-29 00:48:11.977927 | orchestrator | Sunday 29 March 2026 00:47:18 +0000 (0:00:25.633) 0:00:34.981 **********
2026-03-29 00:48:11.978066 | orchestrator | changed: [testbed-manager]
2026-03-29 00:48:11.978080 | orchestrator |
2026-03-29 00:48:11.978087 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:48:11.978094 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:48:11.978102 | orchestrator |
2026-03-29 00:48:11.978109 | orchestrator |
2026-03-29 00:48:11.978115 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:48:11.978122 | orchestrator | Sunday 29 March 2026 00:47:21 +0000 (0:00:02.842) 0:00:37.824 **********
2026-03-29 00:48:11.978128 | orchestrator | ===============================================================================
2026-03-29 00:48:11.978135 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.63s
2026-03-29 00:48:11.978142 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.84s
2026-03-29 00:48:11.978148 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.73s
2026-03-29 00:48:11.978154 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.59s
2026-03-29 00:48:11.978160 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.54s
2026-03-29 00:48:11.978166 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.50s
2026-03-29 00:48:11.978173 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.31s
2026-03-29 00:48:11.978180 | orchestrator |
2026-03-29 00:48:11.978185 | orchestrator |
2026-03-29 00:48:11.978192 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-29 00:48:11.978198 | orchestrator |
2026-03-29 00:48:11.978204 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-29 00:48:11.978211 | orchestrator | Sunday 29 March 2026 00:46:44 +0000 (0:00:00.341) 0:00:00.341 **********
2026-03-29 00:48:11.978216 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-29 00:48:11.978221 | orchestrator |
2026-03-29 00:48:11.978227 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-29 00:48:11.978233 | orchestrator | Sunday 29 March 2026 00:46:45 +0000 (0:00:00.441) 0:00:00.783 **********
2026-03-29 00:48:11.978240 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-29 00:48:11.978246 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-29 00:48:11.978253 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-29 00:48:11.978259 | orchestrator |
2026-03-29 00:48:11.978266 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-29 00:48:11.978271 | orchestrator | Sunday 29 March 2026 00:46:48 +0000 (0:00:03.616) 0:00:04.399 **********
2026-03-29 00:48:11.978277 | orchestrator | changed:
[testbed-manager]
2026-03-29 00:48:11.978281 | orchestrator |
2026-03-29 00:48:11.978285 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-29 00:48:11.978304 | orchestrator | Sunday 29 March 2026 00:46:51 +0000 (0:00:02.513) 0:00:06.913 **********
2026-03-29 00:48:11.978309 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-29 00:48:11.978313 | orchestrator | ok: [testbed-manager]
2026-03-29 00:48:11.978317 | orchestrator |
2026-03-29 00:48:11.978321 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-29 00:48:11.978325 | orchestrator | Sunday 29 March 2026 00:47:26 +0000 (0:00:34.704) 0:00:41.618 **********
2026-03-29 00:48:11.978350 | orchestrator | changed: [testbed-manager]
2026-03-29 00:48:11.978354 | orchestrator |
2026-03-29 00:48:11.978358 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-29 00:48:11.978362 | orchestrator | Sunday 29 March 2026 00:47:27 +0000 (0:00:01.829) 0:00:43.448 **********
2026-03-29 00:48:11.978366 | orchestrator | ok: [testbed-manager]
2026-03-29 00:48:11.978370 | orchestrator |
2026-03-29 00:48:11.978374 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-29 00:48:11.978378 | orchestrator | Sunday 29 March 2026 00:47:28 +0000 (0:00:00.869) 0:00:44.317 **********
2026-03-29 00:48:11.978382 | orchestrator | changed: [testbed-manager]
2026-03-29 00:48:11.978385 | orchestrator |
2026-03-29 00:48:11.978389 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-29 00:48:11.978393 | orchestrator | Sunday 29 March 2026 00:47:31 +0000 (0:00:02.488) 0:00:46.806 **********
2026-03-29 00:48:11.978397 | orchestrator | changed: [testbed-manager]
2026-03-29 00:48:11.978401 | orchestrator |
2026-03-29 00:48:11.978405 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-29 00:48:11.978409 | orchestrator | Sunday 29 March 2026 00:47:32 +0000 (0:00:01.051) 0:00:47.857 **********
2026-03-29 00:48:11.978413 | orchestrator | changed: [testbed-manager]
2026-03-29 00:48:11.978416 | orchestrator |
2026-03-29 00:48:11.978420 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-29 00:48:11.978424 | orchestrator | Sunday 29 March 2026 00:47:33 +0000 (0:00:00.879) 0:00:48.737 **********
2026-03-29 00:48:11.978427 | orchestrator | ok: [testbed-manager]
2026-03-29 00:48:11.978431 | orchestrator |
2026-03-29 00:48:11.978435 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:48:11.978439 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:48:11.978443 | orchestrator |
2026-03-29 00:48:11.978446 | orchestrator |
2026-03-29 00:48:11.978450 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:48:11.978454 | orchestrator | Sunday 29 March 2026 00:47:33 +0000 (0:00:00.470) 0:00:49.208 **********
2026-03-29 00:48:11.978458 | orchestrator | ===============================================================================
2026-03-29 00:48:11.978462 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.70s
2026-03-29 00:48:11.978465 | orchestrator | osism.services.openstackclient : Create required directories ------------ 3.62s
2026-03-29 00:48:11.978469 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.51s
2026-03-29 00:48:11.978473 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.49s
2026-03-29 00:48:11.978476 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.83s
2026-03-29 00:48:11.978480 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.05s
2026-03-29 00:48:11.978484 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.88s
2026-03-29 00:48:11.978488 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.87s
2026-03-29 00:48:11.978491 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.47s
2026-03-29 00:48:11.978495 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.44s
2026-03-29 00:48:11.978502 | orchestrator |
2026-03-29 00:48:11.978506 | orchestrator |
2026-03-29 00:48:11.978509 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-29 00:48:11.978513 | orchestrator |
2026-03-29 00:48:11.978517 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-29 00:48:11.978520 | orchestrator | Sunday 29 March 2026 00:47:03 +0000 (0:00:00.297) 0:00:00.297 **********
2026-03-29 00:48:11.978524 | orchestrator | ok: [testbed-manager]
2026-03-29 00:48:11.978528 | orchestrator |
2026-03-29 00:48:11.978532 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-29 00:48:11.978535 | orchestrator | Sunday 29 March 2026 00:47:05 +0000 (0:00:01.880) 0:00:02.178 **********
2026-03-29 00:48:11.978539 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-29 00:48:11.978543 | orchestrator |
2026-03-29 00:48:11.978546 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-29 00:48:11.978550 | orchestrator | Sunday 29 March 2026 00:47:06 +0000 (0:00:00.492) 0:00:02.671 **********
2026-03-29 00:48:11.978554 | orchestrator | changed: [testbed-manager]
2026-03-29 00:48:11.978558 | orchestrator |
2026-03-29 00:48:11.978561 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-29 00:48:11.978567 | orchestrator | Sunday 29 March 2026 00:47:07 +0000 (0:00:01.378) 0:00:04.049 **********
2026-03-29 00:48:11.978571 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-29 00:48:11.978575 | orchestrator | ok: [testbed-manager]
2026-03-29 00:48:11.978578 | orchestrator |
2026-03-29 00:48:11.978582 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-29 00:48:11.978586 | orchestrator | Sunday 29 March 2026 00:48:01 +0000 (0:00:53.673) 0:00:57.723 **********
2026-03-29 00:48:11.978590 | orchestrator | changed: [testbed-manager]
2026-03-29 00:48:11.978593 | orchestrator |
2026-03-29 00:48:11.978597 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:48:11.978605 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:48:11.978609 | orchestrator |
2026-03-29 00:48:11.978613 | orchestrator |
2026-03-29 00:48:11.978616 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:48:11.978620 | orchestrator | Sunday 29 March 2026 00:48:09 +0000 (0:00:08.336) 0:01:06.059 **********
2026-03-29 00:48:11.978624 | orchestrator | ===============================================================================
2026-03-29 00:48:11.978628 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 53.67s
2026-03-29 00:48:11.978631 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 8.34s
2026-03-29 00:48:11.978635 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.88s
2026-03-29 00:48:11.978639 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.38s
2026-03-29 00:48:11.978642 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.49s
2026-03-29 00:48:11.978646 | orchestrator | 2026-03-29 00:48:11 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:48:11.978828 | orchestrator | 2026-03-29 00:48:11 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED
2026-03-29 00:48:11.979598 | orchestrator | 2026-03-29 00:48:11 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:48:11.980066 | orchestrator | 2026-03-29 00:48:11 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:48:15.079247 | orchestrator | 2026-03-29 00:48:15 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:48:15.079858 | orchestrator | 2026-03-29 00:48:15 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:48:15.080885 | orchestrator | 2026-03-29 00:48:15 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED
2026-03-29 00:48:15.082958 | orchestrator | 2026-03-29 00:48:15 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:48:15.083007 | orchestrator | 2026-03-29 00:48:15 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:48:18.132565 | orchestrator | 2026-03-29 00:48:18 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:48:18.137541 | orchestrator | 2026-03-29 00:48:18 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:48:18.139553 | orchestrator | 2026-03-29 00:48:18 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state STARTED
2026-03-29 00:48:18.142164 | orchestrator | 2026-03-29 00:48:18 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:48:18.142244 | orchestrator | 2026-03-29 00:48:18 | INFO  |
Wait 1 second(s) until the next check
2026-03-29 00:48:21.193040 | orchestrator | 2026-03-29 00:48:21 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:48:21.196793 | orchestrator | 2026-03-29 00:48:21 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:48:21.198141 | orchestrator | 2026-03-29 00:48:21 | INFO  | Task 3263f584-1399-454b-8de0-da9439e1a5cc is in state SUCCESS
2026-03-29 00:48:21.198799 | orchestrator |
2026-03-29 00:48:21.198820 | orchestrator |
2026-03-29 00:48:21.198828 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 00:48:21.198835 | orchestrator |
2026-03-29 00:48:21.198842 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 00:48:21.198850 | orchestrator | Sunday 29 March 2026 00:46:44 +0000 (0:00:00.481) 0:00:00.481 **********
2026-03-29 00:48:21.198856 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-29 00:48:21.198864 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-29 00:48:21.198870 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-29 00:48:21.198874 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-29 00:48:21.198878 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-29 00:48:21.198883 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-29 00:48:21.198887 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-29 00:48:21.198892 | orchestrator |
2026-03-29 00:48:21.198896 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-29 00:48:21.198903 | orchestrator |
2026-03-29 00:48:21.198951 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-29 00:48:21.198970 | orchestrator | Sunday 29 March 2026 00:46:47 +0000 (0:00:03.018) 0:00:03.500 **********
2026-03-29 00:48:21.198982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:48:21.198990 | orchestrator |
2026-03-29 00:48:21.198994 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-29 00:48:21.198999 | orchestrator | Sunday 29 March 2026 00:46:49 +0000 (0:00:01.412) 0:00:04.913 **********
2026-03-29 00:48:21.199003 | orchestrator | ok: [testbed-manager]
2026-03-29 00:48:21.199008 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:48:21.199012 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:48:21.199017 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:48:21.199021 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:48:21.199026 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:48:21.199030 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:48:21.199034 | orchestrator |
2026-03-29 00:48:21.199038 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-29 00:48:21.199052 | orchestrator | Sunday 29 March 2026 00:46:52 +0000 (0:00:02.834) 0:00:07.747 **********
2026-03-29 00:48:21.199056 | orchestrator | ok: [testbed-manager]
2026-03-29 00:48:21.199060 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:48:21.199063 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:48:21.199067 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:48:21.199071 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:48:21.199074 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:48:21.199078 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:48:21.199082 | orchestrator |
2026-03-29 00:48:21.199086 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-29 00:48:21.199089 | orchestrator | Sunday 29 March 2026 00:46:55 +0000 (0:00:03.384) 0:00:11.132 **********
2026-03-29 00:48:21.199093 | orchestrator | changed: [testbed-manager]
2026-03-29 00:48:21.199097 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:48:21.199101 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:48:21.199104 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:48:21.199108 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:48:21.199112 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:48:21.199116 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:48:21.199119 | orchestrator |
2026-03-29 00:48:21.199123 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-29 00:48:21.199127 | orchestrator | Sunday 29 March 2026 00:46:57 +0000 (0:00:02.401) 0:00:13.533 **********
2026-03-29 00:48:21.199131 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:48:21.199134 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:48:21.199138 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:48:21.199142 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:48:21.199145 | orchestrator | changed: [testbed-manager]
2026-03-29 00:48:21.199149 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:48:21.199153 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:48:21.199156 | orchestrator |
2026-03-29 00:48:21.199160 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-29 00:48:21.199164 | orchestrator | Sunday 29 March 2026 00:47:09 +0000 (0:00:11.994) 0:00:25.528 **********
2026-03-29 00:48:21.199168 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:48:21.199171 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:48:21.199175 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:48:21.199179 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:48:21.199182 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:48:21.199186 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:48:21.199190 | orchestrator | changed: [testbed-manager]
2026-03-29 00:48:21.199194 | orchestrator |
2026-03-29 00:48:21.199197 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-29 00:48:21.199201 | orchestrator | Sunday 29 March 2026 00:47:49 +0000 (0:00:40.005) 0:01:05.534 **********
2026-03-29 00:48:21.199205 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:48:21.199210 | orchestrator |
2026-03-29 00:48:21.199214 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-29 00:48:21.199217 | orchestrator | Sunday 29 March 2026 00:47:51 +0000 (0:00:01.450) 0:01:06.984 **********
2026-03-29 00:48:21.199221 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-29 00:48:21.199225 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-29 00:48:21.199229 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-29 00:48:21.199232 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-29 00:48:21.199243 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-29 00:48:21.199247 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-29 00:48:21.199251 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-29 00:48:21.199255 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-29 00:48:21.199262 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-29 00:48:21.199272 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-29 00:48:21.199280 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-29 00:48:21.199284 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-29 00:48:21.199287 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-29 00:48:21.199291 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-29 00:48:21.199295 | orchestrator |
2026-03-29 00:48:21.199298 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-29 00:48:21.199303 | orchestrator | Sunday 29 March 2026 00:47:56 +0000 (0:00:04.819) 0:01:11.804 **********
2026-03-29 00:48:21.199307 | orchestrator | ok: [testbed-manager]
2026-03-29 00:48:21.199310 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:48:21.199314 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:48:21.199318 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:48:21.199322 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:48:21.199328 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:48:21.199331 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:48:21.199335 | orchestrator |
2026-03-29 00:48:21.199339 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-29 00:48:21.199343 | orchestrator | Sunday 29 March 2026 00:47:57 +0000 (0:00:01.449) 0:01:13.253 **********
2026-03-29 00:48:21.199346 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:48:21.199350 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:48:21.199354 | orchestrator | changed: [testbed-manager]
2026-03-29 00:48:21.199358 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:48:21.199361 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:48:21.199365 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:48:21.199369 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:48:21.199373 | orchestrator |
2026-03-29 00:48:21.199376 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-29 00:48:21.199380 | orchestrator | Sunday 29 March 2026 00:47:59 +0000 (0:00:01.360) 0:01:14.613 **********
2026-03-29 00:48:21.199384 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:48:21.199387 | orchestrator | ok: [testbed-manager]
2026-03-29 00:48:21.199391 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:48:21.199395 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:48:21.199399 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:48:21.199402 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:48:21.199406 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:48:21.199410 | orchestrator |
2026-03-29 00:48:21.199413 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-29 00:48:21.199417 | orchestrator | Sunday 29 March 2026 00:48:01 +0000 (0:00:02.454) 0:01:17.068 **********
2026-03-29 00:48:21.199421 | orchestrator | ok: [testbed-manager]
2026-03-29 00:48:21.199425 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:48:21.199428 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:48:21.199432 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:48:21.199435 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:48:21.199439 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:48:21.199443 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:48:21.199447 | orchestrator |
2026-03-29 00:48:21.199450 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-29 00:48:21.199454 | orchestrator | Sunday 29 March 2026 00:48:03 +0000 (0:00:02.380) 0:01:19.449 **********
2026-03-29 00:48:21.199458 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-29 00:48:21.199463 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml
for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:48:21.199467 | orchestrator |
2026-03-29 00:48:21.199471 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-29 00:48:21.199477 | orchestrator | Sunday 29 March 2026 00:48:05 +0000 (0:00:01.539) 0:01:20.988 **********
2026-03-29 00:48:21.199481 | orchestrator | changed: [testbed-manager]
2026-03-29 00:48:21.199485 | orchestrator |
2026-03-29 00:48:21.199489 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-29 00:48:21.199492 | orchestrator | Sunday 29 March 2026 00:48:06 +0000 (0:00:01.554) 0:01:22.543 **********
2026-03-29 00:48:21.199496 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:48:21.199500 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:48:21.199503 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:48:21.199507 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:48:21.199511 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:48:21.199515 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:48:21.199518 | orchestrator | changed: [testbed-manager]
2026-03-29 00:48:21.199522 | orchestrator |
2026-03-29 00:48:21.199526 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:48:21.199530 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:48:21.199534 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:48:21.199538 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:48:21.199542 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:48:21.199548 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:48:21.199552 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:48:21.199555 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:48:21.199559 | orchestrator |
2026-03-29 00:48:21.199563 | orchestrator |
2026-03-29 00:48:21.199567 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:48:21.199570 | orchestrator | Sunday 29 March 2026 00:48:17 +0000 (0:00:10.973) 0:01:33.516 **********
2026-03-29 00:48:21.199574 | orchestrator | ===============================================================================
2026-03-29 00:48:21.199578 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 40.01s
2026-03-29 00:48:21.199581 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.99s
2026-03-29 00:48:21.199585 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 10.97s
2026-03-29 00:48:21.199589 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.82s
2026-03-29 00:48:21.199594 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.38s
2026-03-29 00:48:21.199598 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.02s
2026-03-29 00:48:21.199602 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.83s
2026-03-29 00:48:21.199605 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.45s
2026-03-29 00:48:21.199609 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.40s
2026-03-29 00:48:21.199613 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.38s
2026-03-29 00:48:21.199616 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.55s
2026-03-29 00:48:21.199620 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.54s
2026-03-29 00:48:21.199624 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.45s
2026-03-29 00:48:21.199631 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.45s
2026-03-29 00:48:21.199635 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.41s
2026-03-29 00:48:21.199638 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.36s
2026-03-29 00:48:21.201691 | orchestrator | 2026-03-29 00:48:21 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:48:21.201921 | orchestrator | 2026-03-29 00:48:21 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:48:24.245591 | orchestrator | 2026-03-29 00:48:24 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:48:24.245905 | orchestrator | 2026-03-29 00:48:24 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:48:24.247372 | orchestrator | 2026-03-29 00:48:24 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:48:24.247412 | orchestrator | 2026-03-29 00:48:24 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:48:27.295343 | orchestrator | 2026-03-29 00:48:27 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:48:27.296300 | orchestrator | 2026-03-29 00:48:27 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:48:27.296790 | orchestrator | 2026-03-29 00:48:27 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:48:27.296896 |
orchestrator | 2026-03-29 00:48:27 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:48:30.348832 | orchestrator | 2026-03-29 00:48:30 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:48:30.348891 | orchestrator | 2026-03-29 00:48:30 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:48:30.350354 | orchestrator | 2026-03-29 00:48:30 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:48:30.351355 | orchestrator | 2026-03-29 00:48:30 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:48:33.385632 | orchestrator | 2026-03-29 00:48:33 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:48:33.389367 | orchestrator | 2026-03-29 00:48:33 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:48:33.393064 | orchestrator | 2026-03-29 00:48:33 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:48:33.393137 | orchestrator | 2026-03-29 00:48:33 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:48:36.431247 | orchestrator | 2026-03-29 00:48:36 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:48:36.432111 | orchestrator | 2026-03-29 00:48:36 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:48:36.433296 | orchestrator | 2026-03-29 00:48:36 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:48:36.433335 | orchestrator | 2026-03-29 00:48:36 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:48:39.465677 | orchestrator | 2026-03-29 00:48:39 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:48:39.466224 | orchestrator | 2026-03-29 00:48:39 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:48:39.467027 | orchestrator | 2026-03-29 00:48:39 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:48:39.467070 | orchestrator | 2026-03-29 00:48:39 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:48:42.514652 | orchestrator | 2026-03-29 00:48:42 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:48:42.515207 | orchestrator | 2026-03-29 00:48:42 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:48:42.515641 | orchestrator | 2026-03-29 00:48:42 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:48:42.515771 | orchestrator | 2026-03-29 00:48:42 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:48:45.546096 | orchestrator | 2026-03-29 00:48:45 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:48:45.546722 | orchestrator | 2026-03-29 00:48:45 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state STARTED
2026-03-29 00:48:45.548394 | orchestrator | 2026-03-29 00:48:45 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:48:45.548425 | orchestrator | 2026-03-29 00:48:45 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:48:48.584808 | orchestrator | 2026-03-29 00:48:48 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:48:48.590138 | orchestrator |
2026-03-29 00:48:48.590960 | orchestrator |
2026-03-29 00:48:48.590985 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-29 00:48:48.590998 | orchestrator |
2026-03-29 00:48:48.591010 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-29 00:48:48.591021 | orchestrator | Sunday 29 March 2026 00:46:38 +0000 (0:00:00.242) 0:00:00.242 **********
2026-03-29 00:48:48.591034 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1,
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:48:48.591047 | orchestrator |
2026-03-29 00:48:48.591058 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-29 00:48:48.591068 | orchestrator | Sunday 29 March 2026 00:46:39 +0000 (0:00:01.003) 0:00:01.245 **********
2026-03-29 00:48:48.591083 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-29 00:48:48.591095 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-29 00:48:48.591108 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-29 00:48:48.591124 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-29 00:48:48.591135 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-29 00:48:48.591146 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-29 00:48:48.591157 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-29 00:48:48.591168 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-29 00:48:48.591179 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-29 00:48:48.591191 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-29 00:48:48.591204 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-29 00:48:48.591216 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-29 00:48:48.591227 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-29 00:48:48.591238 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-29 00:48:48.591249 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-29 00:48:48.591261 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-29 00:48:48.591272 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-29 00:48:48.591309 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-29 00:48:48.591321 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-29 00:48:48.591332 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-29 00:48:48.591342 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-29 00:48:48.591353 | orchestrator |
2026-03-29 00:48:48.591364 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-29 00:48:48.591374 | orchestrator | Sunday 29 March 2026 00:46:43 +0000 (0:00:03.668) 0:00:04.914 **********
2026-03-29 00:48:48.591384 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:48:48.591396 | orchestrator |
2026-03-29 00:48:48.591406 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-29 00:48:48.591417 | orchestrator | Sunday 29 March 2026 00:46:44 +0000 (0:00:05.205) 0:00:06.186 **********
2026-03-29 00:48:48.591434 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY':
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:48:48.591449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:48:48.591546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:48:48.591562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:48:48.591575 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:48:48.591597 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:48:48.591620 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:48:48.591633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.591650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.591702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.591715 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.591727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.591746 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.591758 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.591770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.591805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.591816 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.591859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.591872 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.591883 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.591903 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.591914 | orchestrator |
2026-03-29 00:48:48.591925 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-03-29 00:48:48.591968 | orchestrator | Sunday 29 March 2026 00:46:49 +0000 (0:00:05.205) 0:00:11.394 **********
2026-03-29 00:48:48.591980 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:48:48.591991 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.592008 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.592021 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:48:48.592033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:48:48.592079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.592091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.592113 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:48:48.592125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:48:48.592136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.592148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.592160 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:48:48.592171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:48:48.592188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.592200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.592211 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:48:48.592228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:48:48.592239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:48:48.592258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image':
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.592270 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:48:48.592282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 00:48:48.592293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.592305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 00:48:48.592321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.592333 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:48:48.592353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.592365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.592383 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:48:48.592394 | orchestrator | 2026-03-29 00:48:48.592404 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 
2026-03-29 00:48:48.592414 | orchestrator | Sunday 29 March 2026 00:46:52 +0000 (0:00:02.330) 0:00:13.725 ********** 2026-03-29 00:48:48.592425 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 00:48:48.592436 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.592447 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.592458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 00:48:48.592475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.592487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.592497 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:48:48.592507 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:48:48.592537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 00:48:48.592548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.592559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.592570 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:48:48.592580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 00:48:48.592591 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.592601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.592613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 00:48:48.592627 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.592645 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.592655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 00:48:48.592665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.592676 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.592686 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:48:48.592697 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:48:48.592708 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:48:48.592719 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 00:48:48.592740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.592751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.592768 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:48:48.592779 | orchestrator | 2026-03-29 00:48:48.592789 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-29 00:48:48.592800 | orchestrator | Sunday 29 March 2026 00:46:54 +0000 (0:00:02.709) 0:00:16.435 ********** 2026-03-29 00:48:48.592810 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:48:48.592819 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:48:48.592829 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:48:48.592840 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:48:48.592850 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:48:48.592866 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:48:48.592877 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:48:48.592887 | orchestrator | 2026-03-29 00:48:48.592897 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-29 00:48:48.592907 | orchestrator | Sunday 29 March 2026 00:46:56 +0000 (0:00:01.436) 0:00:17.872 ********** 2026-03-29 00:48:48.592918 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:48:48.592928 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:48:48.592959 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:48:48.592970 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:48:48.592981 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:48:48.592991 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:48:48.593001 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:48:48.593011 | orchestrator | 2026-03-29 00:48:48.593021 | orchestrator | TASK [common : Copying over config.json files 
for services] ******************** 2026-03-29 00:48:48.593031 | orchestrator | Sunday 29 March 2026 00:46:58 +0000 (0:00:01.596) 0:00:19.468 ********** 2026-03-29 00:48:48.593042 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.593054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.593065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.593076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.593105 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.593117 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.593135 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.593147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.593158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.593169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-29 00:48:48.593181 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.593203 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.593214 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.593231 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.593243 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.593255 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.593266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.593278 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.593289 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.593307 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.593323 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.593334 | orchestrator | 2026-03-29 00:48:48.593345 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-29 00:48:48.593355 | orchestrator | Sunday 29 
March 2026 00:47:05 +0000 (0:00:07.482) 0:00:26.951 ********** 2026-03-29 00:48:48.593366 | orchestrator | [WARNING]: Skipped 2026-03-29 00:48:48.593377 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-29 00:48:48.593387 | orchestrator | to this access issue: 2026-03-29 00:48:48.593396 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-29 00:48:48.593406 | orchestrator | directory 2026-03-29 00:48:48.593416 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-29 00:48:48.593426 | orchestrator | 2026-03-29 00:48:48.593435 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-29 00:48:48.593444 | orchestrator | Sunday 29 March 2026 00:47:06 +0000 (0:00:00.766) 0:00:27.718 ********** 2026-03-29 00:48:48.593454 | orchestrator | [WARNING]: Skipped 2026-03-29 00:48:48.593463 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-29 00:48:48.593476 | orchestrator | to this access issue: 2026-03-29 00:48:48.593486 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-29 00:48:48.593496 | orchestrator | directory 2026-03-29 00:48:48.593505 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-29 00:48:48.593514 | orchestrator | 2026-03-29 00:48:48.593524 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-29 00:48:48.593533 | orchestrator | Sunday 29 March 2026 00:47:07 +0000 (0:00:00.836) 0:00:28.554 ********** 2026-03-29 00:48:48.593542 | orchestrator | [WARNING]: Skipped 2026-03-29 00:48:48.593552 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-29 00:48:48.593562 | orchestrator | to this access issue: 2026-03-29 00:48:48.593571 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-29 00:48:48.593580 | orchestrator | directory 2026-03-29 00:48:48.593590 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-29 00:48:48.593599 | orchestrator | 2026-03-29 00:48:48.593608 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-29 00:48:48.593618 | orchestrator | Sunday 29 March 2026 00:47:08 +0000 (0:00:01.140) 0:00:29.695 ********** 2026-03-29 00:48:48.593627 | orchestrator | [WARNING]: Skipped 2026-03-29 00:48:48.593636 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-29 00:48:48.593646 | orchestrator | to this access issue: 2026-03-29 00:48:48.593655 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-29 00:48:48.593664 | orchestrator | directory 2026-03-29 00:48:48.593672 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-29 00:48:48.593687 | orchestrator | 2026-03-29 00:48:48.593696 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-29 00:48:48.593705 | orchestrator | Sunday 29 March 2026 00:47:09 +0000 (0:00:00.835) 0:00:30.531 ********** 2026-03-29 00:48:48.593714 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:48:48.593722 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:48:48.593731 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:48:48.593740 | orchestrator | changed: [testbed-manager] 2026-03-29 00:48:48.593749 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:48:48.593758 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:48:48.593768 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:48:48.593777 | orchestrator | 2026-03-29 00:48:48.593786 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-29 00:48:48.593796 | orchestrator | 
Sunday 29 March 2026 00:47:13 +0000 (0:00:04.601) 0:00:35.132 ********** 2026-03-29 00:48:48.593805 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 00:48:48.593815 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 00:48:48.593825 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 00:48:48.593834 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 00:48:48.593844 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 00:48:48.593853 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 00:48:48.593862 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 00:48:48.593872 | orchestrator | 2026-03-29 00:48:48.593882 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-29 00:48:48.593891 | orchestrator | Sunday 29 March 2026 00:47:17 +0000 (0:00:04.070) 0:00:39.203 ********** 2026-03-29 00:48:48.593901 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:48:48.593910 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:48:48.593919 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:48:48.593928 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:48:48.593964 | orchestrator | changed: [testbed-manager] 2026-03-29 00:48:48.593974 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:48:48.593983 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:48:48.593992 | orchestrator | 2026-03-29 00:48:48.594002 | orchestrator | TASK [common : Ensuring config directories have correct owner and 
permission] *** 2026-03-29 00:48:48.594011 | orchestrator | Sunday 29 March 2026 00:47:21 +0000 (0:00:03.353) 0:00:42.556 ********** 2026-03-29 00:48:48.594071 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.594087 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.594103 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.594113 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.594123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.594133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.594144 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594154 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594169 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.594185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.594200 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594210 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.594220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.594230 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594241 | orchestrator | ok: [testbed-node-3] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594250 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.594265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.594286 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.594296 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:48:48.594306 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594316 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594326 | orchestrator | 2026-03-29 00:48:48.594335 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-29 00:48:48.594345 | orchestrator | Sunday 29 March 2026 00:47:24 +0000 (0:00:03.407) 0:00:45.963 ********** 2026-03-29 00:48:48.594355 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 00:48:48.594364 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 00:48:48.594374 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 00:48:48.594383 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 00:48:48.594393 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 00:48:48.594402 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 00:48:48.594412 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 00:48:48.594421 | orchestrator | 2026-03-29 00:48:48.594431 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-29 00:48:48.594440 | orchestrator | Sunday 29 March 2026 00:47:28 +0000 (0:00:03.612) 0:00:49.576 ********** 2026-03-29 00:48:48.594450 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 00:48:48.594459 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 00:48:48.594468 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 00:48:48.594478 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 00:48:48.594487 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 00:48:48.594497 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 00:48:48.594512 | orchestrator | changed: 
[testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 00:48:48.594522 | orchestrator | 2026-03-29 00:48:48.594536 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-29 00:48:48.594546 | orchestrator | Sunday 29 March 2026 00:47:31 +0000 (0:00:03.354) 0:00:52.930 ********** 2026-03-29 00:48:48.594556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.594574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.594585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-03-29 00:48:48.594595 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.594605 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.594615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594625 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.594641 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:48:48.594664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594684 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594694 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594715 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594734 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594759 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594778 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594788 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:48:48.594797 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-29 00:48:48.594807 | orchestrator | 2026-03-29 00:48:48.594817 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-29 00:48:48.594826 | orchestrator | Sunday 29 March 2026 00:47:34 +0000 (0:00:03.191) 0:00:56.121 ********** 2026-03-29 00:48:48.594836 | orchestrator | changed: [testbed-manager] 2026-03-29 00:48:48.594845 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:48:48.594860 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:48:48.594871 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:48:48.594880 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:48:48.594889 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:48:48.594899 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:48:48.594908 | orchestrator | 2026-03-29 00:48:48.594918 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-29 00:48:48.594927 | orchestrator | Sunday 29 March 2026 00:47:36 +0000 (0:00:01.418) 0:00:57.540 ********** 2026-03-29 00:48:48.594955 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:48:48.594964 | orchestrator | changed: [testbed-manager] 2026-03-29 00:48:48.594973 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:48:48.594982 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:48:48.594992 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:48:48.595001 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:48:48.595010 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:48:48.595020 | orchestrator | 2026-03-29 00:48:48.595029 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 00:48:48.595039 | orchestrator | Sunday 29 March 2026 00:47:37 +0000 (0:00:01.301) 0:00:58.841 ********** 2026-03-29 00:48:48.595048 | orchestrator | 2026-03-29 00:48:48.595057 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2026-03-29 00:48:48.595066 | orchestrator | Sunday 29 March 2026 00:47:37 +0000 (0:00:00.050) 0:00:58.892 ********** 2026-03-29 00:48:48.595076 | orchestrator | 2026-03-29 00:48:48.595085 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 00:48:48.595098 | orchestrator | Sunday 29 March 2026 00:47:37 +0000 (0:00:00.048) 0:00:58.940 ********** 2026-03-29 00:48:48.595107 | orchestrator | 2026-03-29 00:48:48.595117 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 00:48:48.595126 | orchestrator | Sunday 29 March 2026 00:47:37 +0000 (0:00:00.048) 0:00:58.988 ********** 2026-03-29 00:48:48.595135 | orchestrator | 2026-03-29 00:48:48.595143 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 00:48:48.595153 | orchestrator | Sunday 29 March 2026 00:47:37 +0000 (0:00:00.049) 0:00:59.037 ********** 2026-03-29 00:48:48.595162 | orchestrator | 2026-03-29 00:48:48.595171 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 00:48:48.595180 | orchestrator | Sunday 29 March 2026 00:47:37 +0000 (0:00:00.048) 0:00:59.086 ********** 2026-03-29 00:48:48.595190 | orchestrator | 2026-03-29 00:48:48.595199 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 00:48:48.595208 | orchestrator | Sunday 29 March 2026 00:47:37 +0000 (0:00:00.048) 0:00:59.134 ********** 2026-03-29 00:48:48.595217 | orchestrator | 2026-03-29 00:48:48.595227 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-29 00:48:48.595241 | orchestrator | Sunday 29 March 2026 00:47:37 +0000 (0:00:00.066) 0:00:59.201 ********** 2026-03-29 00:48:48.595250 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:48:48.595258 | orchestrator | 
changed: [testbed-node-5] 2026-03-29 00:48:48.595266 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:48:48.595274 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:48:48.595283 | orchestrator | changed: [testbed-manager] 2026-03-29 00:48:48.595292 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:48:48.595302 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:48:48.595311 | orchestrator | 2026-03-29 00:48:48.595321 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-29 00:48:48.595330 | orchestrator | Sunday 29 March 2026 00:48:08 +0000 (0:00:30.972) 0:01:30.174 ********** 2026-03-29 00:48:48.595339 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:48:48.595349 | orchestrator | changed: [testbed-manager] 2026-03-29 00:48:48.595358 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:48:48.595367 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:48:48.595376 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:48:48.595392 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:48:48.595401 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:48:48.595411 | orchestrator | 2026-03-29 00:48:48.595420 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-29 00:48:48.595430 | orchestrator | Sunday 29 March 2026 00:48:36 +0000 (0:00:27.851) 0:01:58.025 ********** 2026-03-29 00:48:48.595439 | orchestrator | ok: [testbed-manager] 2026-03-29 00:48:48.595448 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:48:48.595458 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:48:48.595467 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:48:48.595477 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:48:48.595486 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:48:48.595495 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:48:48.595504 | orchestrator | 2026-03-29 00:48:48.595514 | orchestrator | RUNNING HANDLER 
[common : Restart cron container] ****************************** 2026-03-29 00:48:48.595523 | orchestrator | Sunday 29 March 2026 00:48:38 +0000 (0:00:01.960) 0:01:59.986 ********** 2026-03-29 00:48:48.595532 | orchestrator | changed: [testbed-manager] 2026-03-29 00:48:48.595542 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:48:48.595551 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:48:48.595560 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:48:48.595570 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:48:48.595579 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:48:48.595588 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:48:48.595598 | orchestrator | 2026-03-29 00:48:48.595606 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:48:48.595625 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-29 00:48:48.595635 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-29 00:48:48.595643 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-29 00:48:48.595651 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-29 00:48:48.595659 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-29 00:48:48.595668 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-29 00:48:48.595677 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-29 00:48:48.595687 | orchestrator | 2026-03-29 00:48:48.595696 | orchestrator | 2026-03-29 00:48:48.595706 | orchestrator | TASKS RECAP ******************************************************************** 
2026-03-29 00:48:48.595716 | orchestrator | Sunday 29 March 2026 00:48:47 +0000 (0:00:08.986) 0:02:08.973 ********** 2026-03-29 00:48:48.595725 | orchestrator | =============================================================================== 2026-03-29 00:48:48.595734 | orchestrator | common : Restart fluentd container ------------------------------------- 30.97s 2026-03-29 00:48:48.595744 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 27.85s 2026-03-29 00:48:48.595753 | orchestrator | common : Restart cron container ----------------------------------------- 8.99s 2026-03-29 00:48:48.595767 | orchestrator | common : Copying over config.json files for services -------------------- 7.48s 2026-03-29 00:48:48.595777 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.21s 2026-03-29 00:48:48.595786 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.60s 2026-03-29 00:48:48.595796 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.07s 2026-03-29 00:48:48.595811 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.67s 2026-03-29 00:48:48.595820 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.61s 2026-03-29 00:48:48.595830 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.41s 2026-03-29 00:48:48.595839 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.35s 2026-03-29 00:48:48.595847 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.35s 2026-03-29 00:48:48.595856 | orchestrator | common : Check common containers ---------------------------------------- 3.19s 2026-03-29 00:48:48.595865 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.71s 2026-03-29 
00:48:48.595879 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.33s 2026-03-29 00:48:48.595887 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.96s 2026-03-29 00:48:48.595895 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.60s 2026-03-29 00:48:48.595903 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.44s 2026-03-29 00:48:48.595912 | orchestrator | common : Creating log volume -------------------------------------------- 1.42s 2026-03-29 00:48:48.595921 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.30s 2026-03-29 00:48:48.595930 | orchestrator | 2026-03-29 00:48:48 | INFO  | Task b2be653b-bac8-4300-bc75-a5fe2da4c5df is in state SUCCESS 2026-03-29 00:48:48.595996 | orchestrator | 2026-03-29 00:48:48 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:48:48.596006 | orchestrator | 2026-03-29 00:48:48 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:48:51.614231 | orchestrator | 2026-03-29 00:48:51 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:48:51.614364 | orchestrator | 2026-03-29 00:48:51 | INFO  | Task 8c4b91d4-e389-4682-99b9-c2ae62935991 is in state STARTED 2026-03-29 00:48:51.614374 | orchestrator | 2026-03-29 00:48:51 | INFO  | Task 6c138a55-119f-47ca-b332-964599c0cd53 is in state STARTED 2026-03-29 00:48:51.614382 | orchestrator | 2026-03-29 00:48:51 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:48:51.614388 | orchestrator | 2026-03-29 00:48:51 | INFO  | Task 1952dd4e-e8fa-4f77-9602-05a66e609231 is in state STARTED 2026-03-29 00:48:51.614826 | orchestrator | 2026-03-29 00:48:51 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:48:51.614851 | orchestrator | 2026-03-29 
00:48:51 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:48:54.645733 | orchestrator | 2026-03-29 00:48:54 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:48:54.646203 | orchestrator | 2026-03-29 00:48:54 | INFO  | Task 8c4b91d4-e389-4682-99b9-c2ae62935991 is in state STARTED 2026-03-29 00:48:54.647010 | orchestrator | 2026-03-29 00:48:54 | INFO  | Task 6c138a55-119f-47ca-b332-964599c0cd53 is in state STARTED 2026-03-29 00:48:54.647544 | orchestrator | 2026-03-29 00:48:54 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:48:54.648248 | orchestrator | 2026-03-29 00:48:54 | INFO  | Task 1952dd4e-e8fa-4f77-9602-05a66e609231 is in state STARTED 2026-03-29 00:48:54.648873 | orchestrator | 2026-03-29 00:48:54 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:48:54.648906 | orchestrator | 2026-03-29 00:48:54 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:48:57.695748 | orchestrator | 2026-03-29 00:48:57 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:48:57.695833 | orchestrator | 2026-03-29 00:48:57 | INFO  | Task 8c4b91d4-e389-4682-99b9-c2ae62935991 is in state STARTED 2026-03-29 00:48:57.696119 | orchestrator | 2026-03-29 00:48:57 | INFO  | Task 6c138a55-119f-47ca-b332-964599c0cd53 is in state STARTED 2026-03-29 00:48:57.697035 | orchestrator | 2026-03-29 00:48:57 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:48:57.697715 | orchestrator | 2026-03-29 00:48:57 | INFO  | Task 1952dd4e-e8fa-4f77-9602-05a66e609231 is in state STARTED 2026-03-29 00:48:57.698514 | orchestrator | 2026-03-29 00:48:57 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:48:57.698548 | orchestrator | 2026-03-29 00:48:57 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:49:00.729162 | orchestrator | 2026-03-29 00:49:00 | INFO  | Task 
d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:49:00.729738 | orchestrator | 2026-03-29 00:49:00 | INFO  | Task 8c4b91d4-e389-4682-99b9-c2ae62935991 is in state STARTED 2026-03-29 00:49:00.731894 | orchestrator | 2026-03-29 00:49:00 | INFO  | Task 6c138a55-119f-47ca-b332-964599c0cd53 is in state STARTED 2026-03-29 00:49:00.733452 | orchestrator | 2026-03-29 00:49:00 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:49:00.734930 | orchestrator | 2026-03-29 00:49:00 | INFO  | Task 1952dd4e-e8fa-4f77-9602-05a66e609231 is in state STARTED 2026-03-29 00:49:00.735669 | orchestrator | 2026-03-29 00:49:00 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:49:00.735696 | orchestrator | 2026-03-29 00:49:00 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:49:03.768236 | orchestrator | 2026-03-29 00:49:03 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:49:03.768536 | orchestrator | 2026-03-29 00:49:03 | INFO  | Task 8c4b91d4-e389-4682-99b9-c2ae62935991 is in state STARTED 2026-03-29 00:49:03.769549 | orchestrator | 2026-03-29 00:49:03 | INFO  | Task 6c138a55-119f-47ca-b332-964599c0cd53 is in state STARTED 2026-03-29 00:49:03.770475 | orchestrator | 2026-03-29 00:49:03 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:49:03.772491 | orchestrator | 2026-03-29 00:49:03 | INFO  | Task 1952dd4e-e8fa-4f77-9602-05a66e609231 is in state STARTED 2026-03-29 00:49:03.773324 | orchestrator | 2026-03-29 00:49:03 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:49:03.773357 | orchestrator | 2026-03-29 00:49:03 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:49:06.810697 | orchestrator | 2026-03-29 00:49:06 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:49:06.811527 | orchestrator | 2026-03-29 00:49:06 | INFO  | Task 
8c4b91d4-e389-4682-99b9-c2ae62935991 is in state SUCCESS 2026-03-29 00:49:06.812862 | orchestrator | 2026-03-29 00:49:06 | INFO  | Task 6c138a55-119f-47ca-b332-964599c0cd53 is in state STARTED 2026-03-29 00:49:06.813768 | orchestrator | 2026-03-29 00:49:06 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:49:06.814992 | orchestrator | 2026-03-29 00:49:06 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:49:06.815941 | orchestrator | 2026-03-29 00:49:06 | INFO  | Task 1952dd4e-e8fa-4f77-9602-05a66e609231 is in state STARTED 2026-03-29 00:49:06.816805 | orchestrator | 2026-03-29 00:49:06 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:49:06.816933 | orchestrator | 2026-03-29 00:49:06 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:49:09.851847 | orchestrator | 2026-03-29 00:49:09 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:49:09.852935 | orchestrator | 2026-03-29 00:49:09 | INFO  | Task 6c138a55-119f-47ca-b332-964599c0cd53 is in state STARTED 2026-03-29 00:49:09.854979 | orchestrator | 2026-03-29 00:49:09 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:49:09.856105 | orchestrator | 2026-03-29 00:49:09 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:49:09.859835 | orchestrator | 2026-03-29 00:49:09 | INFO  | Task 1952dd4e-e8fa-4f77-9602-05a66e609231 is in state STARTED 2026-03-29 00:49:09.860750 | orchestrator | 2026-03-29 00:49:09 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:49:09.860790 | orchestrator | 2026-03-29 00:49:09 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:49:12.936849 | orchestrator | 2026-03-29 00:49:12 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:49:12.936929 | orchestrator | 2026-03-29 00:49:12 | INFO  | Task 
6c138a55-119f-47ca-b332-964599c0cd53 is in state STARTED 2026-03-29 00:49:12.936936 | orchestrator | 2026-03-29 00:49:12 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:49:12.936940 | orchestrator | 2026-03-29 00:49:12 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:49:12.936945 | orchestrator | 2026-03-29 00:49:12 | INFO  | Task 1952dd4e-e8fa-4f77-9602-05a66e609231 is in state STARTED 2026-03-29 00:49:12.936989 | orchestrator | 2026-03-29 00:49:12 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:49:12.936995 | orchestrator | 2026-03-29 00:49:12 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:49:15.966891 | orchestrator | 2026-03-29 00:49:15 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:49:15.967120 | orchestrator | 2026-03-29 00:49:15 | INFO  | Task 6c138a55-119f-47ca-b332-964599c0cd53 is in state SUCCESS 2026-03-29 00:49:15.968934 | orchestrator | 2026-03-29 00:49:15.969063 | orchestrator | 2026-03-29 00:49:15.969077 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 00:49:15.969085 | orchestrator | 2026-03-29 00:49:15.969092 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 00:49:15.969099 | orchestrator | Sunday 29 March 2026 00:48:51 +0000 (0:00:00.284) 0:00:00.284 ********** 2026-03-29 00:49:15.969106 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:49:15.969114 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:49:15.969120 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:49:15.969124 | orchestrator | 2026-03-29 00:49:15.969128 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 00:49:15.969132 | orchestrator | Sunday 29 March 2026 00:48:51 +0000 (0:00:00.309) 0:00:00.594 ********** 2026-03-29 00:49:15.969137 | 
orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-29 00:49:15.969145 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-29 00:49:15.969151 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-29 00:49:15.969157 | orchestrator | 2026-03-29 00:49:15.969163 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-29 00:49:15.969169 | orchestrator | 2026-03-29 00:49:15.969175 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-29 00:49:15.969181 | orchestrator | Sunday 29 March 2026 00:48:52 +0000 (0:00:00.350) 0:00:00.944 ********** 2026-03-29 00:49:15.969187 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:49:15.969195 | orchestrator | 2026-03-29 00:49:15.969202 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-29 00:49:15.969228 | orchestrator | Sunday 29 March 2026 00:48:52 +0000 (0:00:00.642) 0:00:01.587 ********** 2026-03-29 00:49:15.969235 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-29 00:49:15.969242 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-29 00:49:15.969247 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-29 00:49:15.969255 | orchestrator | 2026-03-29 00:49:15.969264 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-29 00:49:15.969270 | orchestrator | Sunday 29 March 2026 00:48:54 +0000 (0:00:01.657) 0:00:03.244 ********** 2026-03-29 00:49:15.969276 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-29 00:49:15.969283 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-29 00:49:15.969289 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-29 
00:49:15.969295 | orchestrator | 2026-03-29 00:49:15.969302 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-03-29 00:49:15.969308 | orchestrator | Sunday 29 March 2026 00:48:56 +0000 (0:00:01.483) 0:00:04.728 ********** 2026-03-29 00:49:15.969315 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:49:15.969321 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:49:15.969327 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:49:15.969334 | orchestrator | 2026-03-29 00:49:15.969338 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-29 00:49:15.969342 | orchestrator | Sunday 29 March 2026 00:48:58 +0000 (0:00:02.121) 0:00:06.850 ********** 2026-03-29 00:49:15.969346 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:49:15.969350 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:49:15.969354 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:49:15.969357 | orchestrator | 2026-03-29 00:49:15.969361 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:49:15.969366 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:49:15.969372 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:49:15.969376 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:49:15.969380 | orchestrator | 2026-03-29 00:49:15.969383 | orchestrator | 2026-03-29 00:49:15.969387 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:49:15.969391 | orchestrator | Sunday 29 March 2026 00:49:05 +0000 (0:00:07.275) 0:00:14.126 ********** 2026-03-29 00:49:15.969395 | orchestrator | =============================================================================== 
2026-03-29 00:49:15.969398 | orchestrator | memcached : Restart memcached container --------------------------------- 7.28s 2026-03-29 00:49:15.969402 | orchestrator | memcached : Check memcached container ----------------------------------- 2.12s 2026-03-29 00:49:15.969406 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.66s 2026-03-29 00:49:15.969410 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.48s 2026-03-29 00:49:15.969413 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.64s 2026-03-29 00:49:15.969417 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s 2026-03-29 00:49:15.969421 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-03-29 00:49:15.969425 | orchestrator | 2026-03-29 00:49:15.969428 | orchestrator | 2026-03-29 00:49:15.969432 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 00:49:15.969436 | orchestrator | 2026-03-29 00:49:15.969445 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 00:49:15.969450 | orchestrator | Sunday 29 March 2026 00:48:51 +0000 (0:00:00.444) 0:00:00.444 ********** 2026-03-29 00:49:15.969454 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:49:15.969463 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:49:15.969467 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:49:15.969471 | orchestrator | 2026-03-29 00:49:15.969476 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 00:49:15.969491 | orchestrator | Sunday 29 March 2026 00:48:52 +0000 (0:00:00.291) 0:00:00.736 ********** 2026-03-29 00:49:15.969496 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-29 00:49:15.969500 | orchestrator | ok: 
[testbed-node-1] => (item=enable_redis_True) 2026-03-29 00:49:15.969505 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-29 00:49:15.969509 | orchestrator | 2026-03-29 00:49:15.969513 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-29 00:49:15.969517 | orchestrator | 2026-03-29 00:49:15.969521 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-29 00:49:15.969526 | orchestrator | Sunday 29 March 2026 00:48:52 +0000 (0:00:00.364) 0:00:01.100 ********** 2026-03-29 00:49:15.969530 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:49:15.969535 | orchestrator | 2026-03-29 00:49:15.969539 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-29 00:49:15.969544 | orchestrator | Sunday 29 March 2026 00:48:53 +0000 (0:00:00.605) 0:00:01.705 ********** 2026-03-29 00:49:15.969550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969591 | orchestrator | 2026-03-29 00:49:15.969596 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-29 00:49:15.969600 | orchestrator | Sunday 29 March 2026 00:48:55 +0000 (0:00:01.895) 0:00:03.600 ********** 2026-03-29 00:49:15.969605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969640 | orchestrator | 2026-03-29 00:49:15.969645 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-29 00:49:15.969649 | orchestrator | Sunday 29 March 2026 00:48:57 +0000 (0:00:02.267) 0:00:05.868 ********** 2026-03-29 00:49:15.969653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969671 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969684 | orchestrator | 2026-03-29 00:49:15.969691 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-29 00:49:15.969696 | orchestrator | Sunday 29 March 2026 00:48:59 +0000 (0:00:02.667) 0:00:08.535 ********** 2026-03-29 00:49:15.969700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:49:15.969737 | orchestrator | 2026-03-29 00:49:15.969741 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-29 00:49:15.969745 | orchestrator | Sunday 29 March 2026 00:49:01 +0000 (0:00:01.655) 0:00:10.191 ********** 2026-03-29 00:49:15.969749 | orchestrator | 2026-03-29 
00:49:15.969753 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-29 00:49:15.969759 | orchestrator | Sunday 29 March 2026 00:49:01 +0000 (0:00:00.174) 0:00:10.365 ********** 2026-03-29 00:49:15.969763 | orchestrator | 2026-03-29 00:49:15.969767 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-29 00:49:15.969770 | orchestrator | Sunday 29 March 2026 00:49:01 +0000 (0:00:00.060) 0:00:10.425 ********** 2026-03-29 00:49:15.969774 | orchestrator | 2026-03-29 00:49:15.969778 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-29 00:49:15.969782 | orchestrator | Sunday 29 March 2026 00:49:01 +0000 (0:00:00.058) 0:00:10.484 ********** 2026-03-29 00:49:15.969785 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:49:15.969789 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:49:15.969793 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:49:15.969797 | orchestrator | 2026-03-29 00:49:15.969801 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-29 00:49:15.969804 | orchestrator | Sunday 29 March 2026 00:49:10 +0000 (0:00:08.228) 0:00:18.713 ********** 2026-03-29 00:49:15.969808 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:49:15.969812 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:49:15.969816 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:49:15.969819 | orchestrator | 2026-03-29 00:49:15.969823 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:49:15.969827 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:49:15.969831 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:49:15.969838 | orchestrator | testbed-node-2 
: ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:49:15.969844 | orchestrator | 2026-03-29 00:49:15.969850 | orchestrator | 2026-03-29 00:49:15.969856 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:49:15.969861 | orchestrator | Sunday 29 March 2026 00:49:13 +0000 (0:00:03.267) 0:00:21.980 ********** 2026-03-29 00:49:15.969866 | orchestrator | =============================================================================== 2026-03-29 00:49:15.969876 | orchestrator | redis : Restart redis container ----------------------------------------- 8.23s 2026-03-29 00:49:15.969882 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.27s 2026-03-29 00:49:15.969888 | orchestrator | redis : Copying over redis config files --------------------------------- 2.67s 2026-03-29 00:49:15.969894 | orchestrator | redis : Copying over default config.json files -------------------------- 2.27s 2026-03-29 00:49:15.969900 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.90s 2026-03-29 00:49:15.969906 | orchestrator | redis : Check redis containers ------------------------------------------ 1.66s 2026-03-29 00:49:15.969912 | orchestrator | redis : include_tasks --------------------------------------------------- 0.61s 2026-03-29 00:49:15.969918 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.36s 2026-03-29 00:49:15.970123 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.29s 2026-03-29 00:49:15.970129 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2026-03-29 00:49:15.970136 | orchestrator | 2026-03-29 00:49:15 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:49:15.970148 | orchestrator | 2026-03-29 00:49:15 | INFO  | Task 
1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:49:15.975109 | orchestrator | 2026-03-29 00:49:15 | INFO  | Task 1952dd4e-e8fa-4f77-9602-05a66e609231 is in state STARTED 2026-03-29 00:49:15.975569 | orchestrator | 2026-03-29 00:49:15 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:49:15.975660 | orchestrator | 2026-03-29 00:49:15 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:49:52.742698 | orchestrator | 2026-03-29 00:49:52 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:49:52.743361 | orchestrator | 2026-03-29 00:49:52 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:49:52.744317 | orchestrator | 2026-03-29 00:49:52 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:49:52.745167 | orchestrator | 2026-03-29 00:49:52 | INFO  | Task 1952dd4e-e8fa-4f77-9602-05a66e609231 is in state STARTED 2026-03-29 00:49:52.745766 | orchestrator | 2026-03-29 00:49:52 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:49:52.745799 | orchestrator | 2026-03-29 00:49:52 | INFO  | Wait 1 
second(s) until the next check 2026-03-29 00:49:55.771733 | orchestrator | 2026-03-29 00:49:55 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:49:55.772647 | orchestrator | 2026-03-29 00:49:55 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:49:55.774075 | orchestrator | 2026-03-29 00:49:55 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:49:55.775462 | orchestrator | 2026-03-29 00:49:55 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:49:55.779365 | orchestrator | 2026-03-29 00:49:55.779419 | orchestrator | 2026-03-29 00:49:55 | INFO  | Task 1952dd4e-e8fa-4f77-9602-05a66e609231 is in state SUCCESS 2026-03-29 00:49:55.781470 | orchestrator | 2026-03-29 00:49:55.781533 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 00:49:55.781544 | orchestrator | 2026-03-29 00:49:55.781551 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 00:49:55.781557 | orchestrator | Sunday 29 March 2026 00:48:51 +0000 (0:00:00.413) 0:00:00.413 ********** 2026-03-29 00:49:55.781563 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:49:55.781570 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:49:55.781577 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:49:55.781582 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:49:55.781588 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:49:55.781594 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:49:55.781600 | orchestrator | 2026-03-29 00:49:55.781606 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 00:49:55.781612 | orchestrator | Sunday 29 March 2026 00:48:51 +0000 (0:00:00.774) 0:00:01.188 ********** 2026-03-29 00:49:55.781618 | orchestrator | ok: [testbed-node-3] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-29 00:49:55.781625 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-29 00:49:55.781631 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-29 00:49:55.781638 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-29 00:49:55.781643 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-29 00:49:55.781649 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-29 00:49:55.781656 | orchestrator | 2026-03-29 00:49:55.781662 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-29 00:49:55.781668 | orchestrator | 2026-03-29 00:49:55.781674 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-29 00:49:55.781680 | orchestrator | Sunday 29 March 2026 00:48:52 +0000 (0:00:00.808) 0:00:01.996 ********** 2026-03-29 00:49:55.781687 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:49:55.781693 | orchestrator | 2026-03-29 00:49:55.781699 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-29 00:49:55.781706 | orchestrator | Sunday 29 March 2026 00:48:54 +0000 (0:00:01.338) 0:00:03.334 ********** 2026-03-29 00:49:55.781712 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-29 00:49:55.781718 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-29 00:49:55.781724 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-29 00:49:55.781731 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-29 00:49:55.781737 | 
orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-29 00:49:55.781743 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-29 00:49:55.781750 | orchestrator | 2026-03-29 00:49:55.781755 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-29 00:49:55.781761 | orchestrator | Sunday 29 March 2026 00:48:55 +0000 (0:00:01.340) 0:00:04.675 ********** 2026-03-29 00:49:55.781767 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-29 00:49:55.781785 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-29 00:49:55.781791 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-29 00:49:55.781797 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-29 00:49:55.781803 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-29 00:49:55.781808 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-29 00:49:55.781814 | orchestrator | 2026-03-29 00:49:55.781820 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-29 00:49:55.781826 | orchestrator | Sunday 29 March 2026 00:48:56 +0000 (0:00:01.460) 0:00:06.135 ********** 2026-03-29 00:49:55.781831 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-29 00:49:55.781837 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:49:55.781843 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-29 00:49:55.781849 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:49:55.781855 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-29 00:49:55.781860 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:49:55.781866 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-29 00:49:55.781871 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:49:55.781877 | orchestrator | skipping: 
[testbed-node-1] => (item=openvswitch)  2026-03-29 00:49:55.781882 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:49:55.781888 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-29 00:49:55.781894 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:49:55.781900 | orchestrator | 2026-03-29 00:49:55.781907 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-29 00:49:55.781913 | orchestrator | Sunday 29 March 2026 00:48:58 +0000 (0:00:01.417) 0:00:07.553 ********** 2026-03-29 00:49:55.781919 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:49:55.781924 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:49:55.781930 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:49:55.781936 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:49:55.781942 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:49:55.781947 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:49:55.781953 | orchestrator | 2026-03-29 00:49:55.781959 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-29 00:49:55.781965 | orchestrator | Sunday 29 March 2026 00:48:59 +0000 (0:00:00.887) 0:00:08.440 ********** 2026-03-29 00:49:55.782108 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 
'timeout': '30'}}})
2026-03-29 00:49:55.782126 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:49:55.782140 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:49:55.782147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:49:55.782153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:49:55.782160 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:49:55.782177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:49:55.782184 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:49:55.782194 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:49:55.782201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:49:55.782208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:49:55.782219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:49:55.782226 | orchestrator |
2026-03-29 00:49:55.782237 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-03-29 00:49:55.782244 | orchestrator | Sunday 29 March 2026 00:49:00 +0000 (0:00:01.520) 0:00:09.961 **********
2026-03-29 00:49:55.782250 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:49:55.782260 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:49:55.782267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:49:55.782274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:49:55.782281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:49:55.782298 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:49:55.782306 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:49:55.782397 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:49:55.782407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:49:55.782414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:49:55.782422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:49:55.782437 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:49:55.782444 | orchestrator |
2026-03-29 00:49:55.782450 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-03-29 00:49:55.782461 | orchestrator | Sunday 29 March 2026 00:49:03 +0000 (0:00:02.475) 0:00:12.436 **********
2026-03-29 00:49:55.782468 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:49:55.782475 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:49:55.782481 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:49:55.782487 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:49:55.782493 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:49:55.782499 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:49:55.782505 | orchestrator |
2026-03-29 00:49:55.782511 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-03-29 00:49:55.782517 | orchestrator | Sunday 29 March 2026 00:49:03 +0000 (0:00:00.674) 0:00:13.111 **********
2026-03-29 00:49:55.782524 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:49:55.782531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:49:55.782538 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:49:55.782544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:49:55.782555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:49:55.782566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:49:55.782572 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:49:55.782579 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:49:55.782584 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:49:55.782594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:49:55.782610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:49:55.782619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:49:55.782625 | orchestrator |
2026-03-29 00:49:55.782632 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-29 00:49:55.782637 | orchestrator | Sunday 29 March 2026 00:49:05 +0000 (0:00:01.868) 0:00:14.980 **********
2026-03-29 00:49:55.782643 | orchestrator |
2026-03-29 00:49:55.782649 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-29 00:49:55.782655 | orchestrator | Sunday 29 March 2026 00:49:05 +0000 (0:00:00.205) 0:00:15.186 **********
2026-03-29 00:49:55.782661 | orchestrator |
2026-03-29 00:49:55.782667 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-29 00:49:55.782673 | orchestrator | Sunday 29 March 2026 00:49:06 +0000 (0:00:00.122) 0:00:15.308 **********
2026-03-29 00:49:55.782678 | orchestrator |
2026-03-29 00:49:55.782683 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-29 00:49:55.782693 | orchestrator | Sunday 29 March 2026 00:49:06 +0000 (0:00:00.129) 0:00:15.438 **********
2026-03-29 00:49:55.782702 | orchestrator |
2026-03-29 00:49:55.782708 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-29 00:49:55.782714 | orchestrator | Sunday 29 March 2026 00:49:06 +0000 (0:00:00.213) 0:00:15.651 **********
2026-03-29 00:49:55.782722 | orchestrator |
2026-03-29 00:49:55.782730 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-29 00:49:55.782737 | orchestrator | Sunday 29 March 2026 00:49:06 +0000 (0:00:00.121) 0:00:15.773 **********
2026-03-29 00:49:55.782743 | orchestrator |
2026-03-29 00:49:55.782748 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-03-29 00:49:55.782754 | orchestrator | Sunday 29 March 2026 00:49:06 +0000 (0:00:00.127) 0:00:15.901 **********
2026-03-29 00:49:55.782760 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:49:55.782766 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:49:55.782772 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:49:55.782779 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:49:55.782786 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:49:55.782792 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:49:55.782798 | orchestrator |
2026-03-29 00:49:55.782804 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-03-29 00:49:55.782811 | orchestrator | Sunday 29 March 2026 00:49:15 +0000 (0:00:09.009) 0:00:24.910 **********
2026-03-29 00:49:55.782818 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:49:55.782825 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:49:55.782832 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:49:55.782838 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:49:55.782844 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:49:55.782855 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:49:55.782862 | orchestrator |
2026-03-29 00:49:55.782868 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-29 00:49:55.782875 | orchestrator | Sunday 29 March 2026 00:49:17 +0000 (0:00:01.412) 0:00:26.323 **********
2026-03-29 00:49:55.782881 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:49:55.782888 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:49:55.782895 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:49:55.782901 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:49:55.782907 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:49:55.782913 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:49:55.782920 | orchestrator |
2026-03-29 00:49:55.782926 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-03-29 00:49:55.782932 | orchestrator | Sunday 29 March 2026 00:49:27 +0000 (0:00:10.046) 0:00:36.369 **********
2026-03-29 00:49:55.782939 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-03-29 00:49:55.782946 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-03-29 00:49:55.782952 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-03-29 00:49:55.782958 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-03-29 00:49:55.782965 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-03-29 00:49:55.782991 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-03-29 00:49:55.782998 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-03-29 00:49:55.783004 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-03-29 00:49:55.783011 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-03-29 00:49:55.783017 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-03-29 00:49:55.783024 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-03-29 00:49:55.783031 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-03-29 00:49:55.783038 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-29 00:49:55.783044 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-29 00:49:55.783050 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-29 00:49:55.783057 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-29 00:49:55.783063 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-29 00:49:55.783070 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-29 00:49:55.783076 | orchestrator |
2026-03-29 00:49:55.783083 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-03-29 00:49:55.783089 | orchestrator | Sunday 29 March 2026 00:49:35 +0000 (0:00:07.962) 0:00:44.331 **********
2026-03-29 00:49:55.783096 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-03-29 00:49:55.783103 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:49:55.783110 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-03-29 00:49:55.783121 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:49:55.783127 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-03-29 00:49:55.783133 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:49:55.783140 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-03-29 00:49:55.783146 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-03-29 00:49:55.783153 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-03-29 00:49:55.783160 | orchestrator |
2026-03-29 00:49:55.783166 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-29 00:49:55.783173 | orchestrator | Sunday 29 March 2026 00:49:37 +0000 (0:00:02.771) 0:00:47.103 **********
2026-03-29 00:49:55.783179 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-29 00:49:55.783186 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:49:55.783193 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-03-29 00:49:55.783200 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:49:55.783206 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-03-29 00:49:55.783213 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:49:55.783219 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-29 00:49:55.783226 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-29 00:49:55.783232 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-29 00:49:55.783239 | orchestrator |
2026-03-29 00:49:55.783246 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-29 00:49:55.783253 | orchestrator | Sunday 29 March 2026 00:49:43 +0000 (0:00:06.009) 0:00:53.112 **********
2026-03-29 00:49:55.783259 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:49:55.783266 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:49:55.783272 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:49:55.783279 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:49:55.783286 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:49:55.783292 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:49:55.783298 | orchestrator |
2026-03-29 00:49:55.783305 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:49:55.783312 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-29 00:49:55.783319 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-29 00:49:55.783326 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-29 00:49:55.783332 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-29 00:49:55.783339 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-29 00:49:55.783352 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-29 00:49:55.783358 | orchestrator |
2026-03-29 00:49:55.783365 | orchestrator |
2026-03-29 00:49:55.783371 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:49:55.783378 | orchestrator | Sunday 29 March 2026 00:49:52 +0000 (0:00:08.750) 0:01:01.862 **********
2026-03-29 00:49:55.783384 | orchestrator | ===============================================================================
2026-03-29 00:49:55.783390 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.80s
2026-03-29 00:49:55.783396 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.01s
2026-03-29 00:49:55.783402 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.96s
2026-03-29 00:49:55.783413 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 6.01s
2026-03-29 00:49:55.783420 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.77s
2026-03-29 00:49:55.783426 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.48s
2026-03-29 00:49:55.783432 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.87s
2026-03-29 00:49:55.783438 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.52s
2026-03-29 00:49:55.783445 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.46s
2026-03-29 00:49:55.783451 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.42s
2026-03-29 00:49:55.783457 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.41s
2026-03-29 00:49:55.783463 | orchestrator | module-load : Load modules ---------------------------------------------- 1.34s
2026-03-29 00:49:55.783470 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.34s
2026-03-29 00:49:55.783476 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.92s
2026-03-29 00:49:55.783482 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.89s
2026-03-29 00:49:55.783489 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s
2026-03-29 00:49:55.783495 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.77s
2026-03-29 00:49:55.783502 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.67s
2026-03-29 00:49:55.783508 | orchestrator | 2026-03-29 00:49:55 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED
2026-03-29 00:49:55.783516 | orchestrator | 2026-03-29 00:49:55 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:49:58.810973 | orchestrator | 2026-03-29 00:49:58 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:49:58.811377 | orchestrator | 2026-03-29 00:49:58 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED
2026-03-29 00:49:58.812115 | orchestrator | 2026-03-29 00:49:58 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED
2026-03-29 00:49:58.815120 | orchestrator | 2026-03-29 00:49:58 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:49:58.815685 | orchestrator | 2026-03-29 00:49:58 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED
2026-03-29 00:49:58.815730 | orchestrator | 2026-03-29 00:49:58 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:50:01.849881 | orchestrator | 2026-03-29 00:50:01 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED
2026-03-29 00:50:01.850491 | orchestrator | 2026-03-29 00:50:01 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED
2026-03-29 00:50:01.851232 | orchestrator | 2026-03-29 00:50:01 | INFO  | Task 
502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:50:01.851974 | orchestrator | 2026-03-29 00:50:01 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:50:01.852702 | orchestrator | 2026-03-29 00:50:01 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:50:01.852755 | orchestrator | 2026-03-29 00:50:01 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:04.884882 | orchestrator | 2026-03-29 00:50:04 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:50:04.885821 | orchestrator | 2026-03-29 00:50:04 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:50:04.887222 | orchestrator | 2026-03-29 00:50:04 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:50:04.888590 | orchestrator | 2026-03-29 00:50:04 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:50:04.890108 | orchestrator | 2026-03-29 00:50:04 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:50:04.890345 | orchestrator | 2026-03-29 00:50:04 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:07.932889 | orchestrator | 2026-03-29 00:50:07 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:50:07.936703 | orchestrator | 2026-03-29 00:50:07 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:50:07.938501 | orchestrator | 2026-03-29 00:50:07 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:50:07.941415 | orchestrator | 2026-03-29 00:50:07 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:50:07.943587 | orchestrator | 2026-03-29 00:50:07 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:50:07.943606 | orchestrator | 2026-03-29 00:50:07 | INFO  | Wait 1 
second(s) until the next check 2026-03-29 00:50:10.990303 | orchestrator | 2026-03-29 00:50:10 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:50:10.990856 | orchestrator | 2026-03-29 00:50:10 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:50:10.992392 | orchestrator | 2026-03-29 00:50:10 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:50:10.993676 | orchestrator | 2026-03-29 00:50:10 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:50:10.997503 | orchestrator | 2026-03-29 00:50:10 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:50:10.998049 | orchestrator | 2026-03-29 00:50:10 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:14.070512 | orchestrator | 2026-03-29 00:50:14 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:50:14.072850 | orchestrator | 2026-03-29 00:50:14 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:50:14.075541 | orchestrator | 2026-03-29 00:50:14 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:50:14.077476 | orchestrator | 2026-03-29 00:50:14 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:50:14.079205 | orchestrator | 2026-03-29 00:50:14 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:50:14.079578 | orchestrator | 2026-03-29 00:50:14 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:17.134927 | orchestrator | 2026-03-29 00:50:17 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:50:17.138436 | orchestrator | 2026-03-29 00:50:17 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:50:17.142183 | orchestrator | 2026-03-29 00:50:17 | INFO  | Task 
502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:50:17.144514 | orchestrator | 2026-03-29 00:50:17 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:50:17.146121 | orchestrator | 2026-03-29 00:50:17 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:50:17.146171 | orchestrator | 2026-03-29 00:50:17 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:20.193682 | orchestrator | 2026-03-29 00:50:20 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:50:20.196406 | orchestrator | 2026-03-29 00:50:20 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:50:20.199191 | orchestrator | 2026-03-29 00:50:20 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:50:20.202534 | orchestrator | 2026-03-29 00:50:20 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:50:20.204517 | orchestrator | 2026-03-29 00:50:20 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:50:20.204585 | orchestrator | 2026-03-29 00:50:20 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:23.240132 | orchestrator | 2026-03-29 00:50:23 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:50:23.240841 | orchestrator | 2026-03-29 00:50:23 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:50:23.243122 | orchestrator | 2026-03-29 00:50:23 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:50:23.243800 | orchestrator | 2026-03-29 00:50:23 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:50:23.244568 | orchestrator | 2026-03-29 00:50:23 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:50:23.244609 | orchestrator | 2026-03-29 00:50:23 | INFO  | Wait 1 
second(s) until the next check 2026-03-29 00:50:26.282273 | orchestrator | 2026-03-29 00:50:26 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:50:26.283346 | orchestrator | 2026-03-29 00:50:26 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:50:26.287481 | orchestrator | 2026-03-29 00:50:26 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:50:26.288245 | orchestrator | 2026-03-29 00:50:26 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:50:26.288734 | orchestrator | 2026-03-29 00:50:26 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:50:26.288755 | orchestrator | 2026-03-29 00:50:26 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:29.339290 | orchestrator | 2026-03-29 00:50:29 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:50:29.339349 | orchestrator | 2026-03-29 00:50:29 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:50:29.340117 | orchestrator | 2026-03-29 00:50:29 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:50:29.340789 | orchestrator | 2026-03-29 00:50:29 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:50:29.342902 | orchestrator | 2026-03-29 00:50:29 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:50:29.342924 | orchestrator | 2026-03-29 00:50:29 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:32.371194 | orchestrator | 2026-03-29 00:50:32 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:50:32.371701 | orchestrator | 2026-03-29 00:50:32 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:50:32.372724 | orchestrator | 2026-03-29 00:50:32 | INFO  | Task 
502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:50:32.374173 | orchestrator | 2026-03-29 00:50:32 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:50:32.374920 | orchestrator | 2026-03-29 00:50:32 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:50:32.375073 | orchestrator | 2026-03-29 00:50:32 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:35.415163 | orchestrator | 2026-03-29 00:50:35 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:50:35.415223 | orchestrator | 2026-03-29 00:50:35 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:50:35.415232 | orchestrator | 2026-03-29 00:50:35 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:50:35.416481 | orchestrator | 2026-03-29 00:50:35 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:50:35.417335 | orchestrator | 2026-03-29 00:50:35 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:50:35.417366 | orchestrator | 2026-03-29 00:50:35 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:38.453438 | orchestrator | 2026-03-29 00:50:38 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:50:38.454877 | orchestrator | 2026-03-29 00:50:38 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:50:38.455581 | orchestrator | 2026-03-29 00:50:38 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:50:38.456502 | orchestrator | 2026-03-29 00:50:38 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:50:38.457163 | orchestrator | 2026-03-29 00:50:38 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:50:38.457339 | orchestrator | 2026-03-29 00:50:38 | INFO  | Wait 1 
second(s) until the next check 2026-03-29 00:50:41.487428 | orchestrator | 2026-03-29 00:50:41 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:50:41.488827 | orchestrator | 2026-03-29 00:50:41 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:50:41.489452 | orchestrator | 2026-03-29 00:50:41 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:50:41.490265 | orchestrator | 2026-03-29 00:50:41 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:50:41.490862 | orchestrator | 2026-03-29 00:50:41 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:50:41.490888 | orchestrator | 2026-03-29 00:50:41 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:44.524282 | orchestrator | 2026-03-29 00:50:44 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:50:44.524792 | orchestrator | 2026-03-29 00:50:44 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:50:44.527515 | orchestrator | 2026-03-29 00:50:44 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:50:44.527943 | orchestrator | 2026-03-29 00:50:44 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:50:44.528692 | orchestrator | 2026-03-29 00:50:44 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:50:44.528719 | orchestrator | 2026-03-29 00:50:44 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:47.563541 | orchestrator | 2026-03-29 00:50:47 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:50:47.564502 | orchestrator | 2026-03-29 00:50:47 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:50:47.565914 | orchestrator | 2026-03-29 00:50:47 | INFO  | Task 
502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:50:47.567049 | orchestrator | 2026-03-29 00:50:47 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:50:47.567553 | orchestrator | 2026-03-29 00:50:47 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:50:47.567589 | orchestrator | 2026-03-29 00:50:47 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:50.604544 | orchestrator | 2026-03-29 00:50:50 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:50:50.604684 | orchestrator | 2026-03-29 00:50:50 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:50:50.606154 | orchestrator | 2026-03-29 00:50:50 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:50:50.607468 | orchestrator | 2026-03-29 00:50:50 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:50:50.608564 | orchestrator | 2026-03-29 00:50:50 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:50:50.609282 | orchestrator | 2026-03-29 00:50:50 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:54.082292 | orchestrator | 2026-03-29 00:50:54 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:50:54.085345 | orchestrator | 2026-03-29 00:50:54 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:50:54.085758 | orchestrator | 2026-03-29 00:50:54 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:50:54.086667 | orchestrator | 2026-03-29 00:50:54 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:50:54.087268 | orchestrator | 2026-03-29 00:50:54 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:50:54.089844 | orchestrator | 2026-03-29 00:50:54 | INFO  | Wait 1 
second(s) until the next check 2026-03-29 00:50:57.263724 | orchestrator | 2026-03-29 00:50:57 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:50:57.264829 | orchestrator | 2026-03-29 00:50:57 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:50:57.266247 | orchestrator | 2026-03-29 00:50:57 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:50:57.266664 | orchestrator | 2026-03-29 00:50:57 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:50:57.267971 | orchestrator | 2026-03-29 00:50:57 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:50:57.268036 | orchestrator | 2026-03-29 00:50:57 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:51:00.696549 | orchestrator | 2026-03-29 00:51:00 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:51:00.696694 | orchestrator | 2026-03-29 00:51:00 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:51:00.696891 | orchestrator | 2026-03-29 00:51:00 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:51:00.697343 | orchestrator | 2026-03-29 00:51:00 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:51:00.697890 | orchestrator | 2026-03-29 00:51:00 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:51:00.697912 | orchestrator | 2026-03-29 00:51:00 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:51:03.730313 | orchestrator | 2026-03-29 00:51:03 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state STARTED 2026-03-29 00:51:03.732363 | orchestrator | 2026-03-29 00:51:03 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:51:03.732520 | orchestrator | 2026-03-29 00:51:03 | INFO  | Task 
502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:51:03.732966 | orchestrator | 2026-03-29 00:51:03 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:51:03.733532 | orchestrator | 2026-03-29 00:51:03 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:51:03.733556 | orchestrator | 2026-03-29 00:51:03 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:51:06.761378 | orchestrator | 2026-03-29 00:51:06.761473 | orchestrator | 2026-03-29 00:51:06.761519 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-29 00:51:06.761529 | orchestrator | 2026-03-29 00:51:06.761538 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-29 00:51:06.761550 | orchestrator | Sunday 29 March 2026 00:46:39 +0000 (0:00:00.275) 0:00:00.275 ********** 2026-03-29 00:51:06.761558 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:51:06.761569 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:51:06.761578 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:51:06.761586 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:06.761595 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:06.761603 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:06.761612 | orchestrator | 2026-03-29 00:51:06.761622 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-29 00:51:06.761631 | orchestrator | Sunday 29 March 2026 00:46:40 +0000 (0:00:00.686) 0:00:00.962 ********** 2026-03-29 00:51:06.761639 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:51:06.761648 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:51:06.761656 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:51:06.761665 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.761675 | orchestrator | skipping: [testbed-node-1] 2026-03-29 
00:51:06.761683 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.761691 | orchestrator | 2026-03-29 00:51:06.761699 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-29 00:51:06.761707 | orchestrator | Sunday 29 March 2026 00:46:41 +0000 (0:00:00.848) 0:00:01.810 ********** 2026-03-29 00:51:06.761716 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:51:06.761724 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:51:06.761733 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:51:06.761741 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.761750 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.761759 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.761767 | orchestrator | 2026-03-29 00:51:06.761775 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-29 00:51:06.761783 | orchestrator | Sunday 29 March 2026 00:46:41 +0000 (0:00:00.794) 0:00:02.605 ********** 2026-03-29 00:51:06.761792 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:51:06.761800 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:51:06.761808 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:51:06.761816 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:06.761824 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:06.761833 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:06.761841 | orchestrator | 2026-03-29 00:51:06.761850 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-29 00:51:06.761860 | orchestrator | Sunday 29 March 2026 00:46:43 +0000 (0:00:02.028) 0:00:04.633 ********** 2026-03-29 00:51:06.761869 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:51:06.761879 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:51:06.761888 | orchestrator | changed: [testbed-node-5] 2026-03-29 
00:51:06.761899 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:06.761909 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:06.761920 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:06.761930 | orchestrator | 2026-03-29 00:51:06.761940 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-29 00:51:06.761977 | orchestrator | Sunday 29 March 2026 00:46:44 +0000 (0:00:01.086) 0:00:05.720 ********** 2026-03-29 00:51:06.762057 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:51:06.762167 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:51:06.762182 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:51:06.762192 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:06.762201 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:06.762210 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:06.762220 | orchestrator | 2026-03-29 00:51:06.762230 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-29 00:51:06.762240 | orchestrator | Sunday 29 March 2026 00:46:46 +0000 (0:00:01.228) 0:00:06.948 ********** 2026-03-29 00:51:06.762250 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:51:06.762272 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:51:06.762282 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:51:06.762291 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.762300 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.762309 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.762329 | orchestrator | 2026-03-29 00:51:06.762338 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-29 00:51:06.762348 | orchestrator | Sunday 29 March 2026 00:46:47 +0000 (0:00:01.071) 0:00:08.019 ********** 2026-03-29 00:51:06.762358 | orchestrator | skipping: [testbed-node-3] 2026-03-29 
00:51:06.762368 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:51:06.762377 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:51:06.762386 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.762396 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.762404 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.762412 | orchestrator | 2026-03-29 00:51:06.762434 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-29 00:51:06.762443 | orchestrator | Sunday 29 March 2026 00:46:48 +0000 (0:00:00.830) 0:00:08.849 ********** 2026-03-29 00:51:06.762451 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 00:51:06.762462 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 00:51:06.762470 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:51:06.762479 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 00:51:06.762488 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 00:51:06.762497 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:51:06.762507 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 00:51:06.762517 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 00:51:06.762526 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:51:06.762535 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 00:51:06.762562 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 00:51:06.762568 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.762573 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 00:51:06.762579 | 
orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 00:51:06.762584 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.762590 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 00:51:06.762595 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 00:51:06.762601 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.762606 | orchestrator | 2026-03-29 00:51:06.762612 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-03-29 00:51:06.762617 | orchestrator | Sunday 29 March 2026 00:46:49 +0000 (0:00:01.186) 0:00:10.036 ********** 2026-03-29 00:51:06.762632 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:51:06.762677 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:51:06.762683 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:51:06.762689 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.762694 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.762700 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.762705 | orchestrator | 2026-03-29 00:51:06.762711 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-29 00:51:06.762717 | orchestrator | Sunday 29 March 2026 00:46:51 +0000 (0:00:01.750) 0:00:11.787 ********** 2026-03-29 00:51:06.762723 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:51:06.762729 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:51:06.762734 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:51:06.762739 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:06.762761 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:06.762767 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:06.762773 | orchestrator | 2026-03-29 00:51:06.762778 | orchestrator | TASK [k3s_download : 
Download k3s binary x64] ********************************** 2026-03-29 00:51:06.762784 | orchestrator | Sunday 29 March 2026 00:46:52 +0000 (0:00:01.309) 0:00:13.096 ********** 2026-03-29 00:51:06.762789 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:51:06.762795 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:51:06.762800 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:51:06.762805 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:06.762811 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:06.762816 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:06.762822 | orchestrator | 2026-03-29 00:51:06.762827 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-29 00:51:06.762833 | orchestrator | Sunday 29 March 2026 00:46:57 +0000 (0:00:05.564) 0:00:18.661 ********** 2026-03-29 00:51:06.762838 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:51:06.762844 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:51:06.762849 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:51:06.762855 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.762861 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.762866 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.762871 | orchestrator | 2026-03-29 00:51:06.762877 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-29 00:51:06.762882 | orchestrator | Sunday 29 March 2026 00:46:59 +0000 (0:00:01.743) 0:00:20.404 ********** 2026-03-29 00:51:06.762888 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:51:06.762893 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:51:06.762898 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:51:06.762904 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.762909 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.762915 | orchestrator | 
skipping: [testbed-node-2] 2026-03-29 00:51:06.762920 | orchestrator | 2026-03-29 00:51:06.762926 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-29 00:51:06.762933 | orchestrator | Sunday 29 March 2026 00:47:02 +0000 (0:00:02.588) 0:00:22.992 ********** 2026-03-29 00:51:06.762938 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:51:06.762943 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:51:06.762949 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:51:06.762954 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.762960 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.762965 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.762970 | orchestrator | 2026-03-29 00:51:06.762976 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-03-29 00:51:06.763003 | orchestrator | Sunday 29 March 2026 00:47:03 +0000 (0:00:01.164) 0:00:24.157 ********** 2026-03-29 00:51:06.763013 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-29 00:51:06.763028 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-29 00:51:06.763039 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:51:06.763045 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-29 00:51:06.763050 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-29 00:51:06.763056 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:51:06.763061 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-29 00:51:06.763067 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-03-29 00:51:06.763072 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:51:06.763078 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-29 00:51:06.763083 | orchestrator | skipping: 
[testbed-node-0] => (item=rancher/k3s)  2026-03-29 00:51:06.763089 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.763094 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-29 00:51:06.763100 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-29 00:51:06.763105 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.763111 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-29 00:51:06.763116 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-29 00:51:06.763122 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.763127 | orchestrator | 2026-03-29 00:51:06.763133 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-29 00:51:06.763146 | orchestrator | Sunday 29 March 2026 00:47:04 +0000 (0:00:01.019) 0:00:25.176 ********** 2026-03-29 00:51:06.763151 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:51:06.763157 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:51:06.763162 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:51:06.763167 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.763173 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.763178 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.763184 | orchestrator | 2026-03-29 00:51:06.763189 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-03-29 00:51:06.763195 | orchestrator | Sunday 29 March 2026 00:47:05 +0000 (0:00:00.841) 0:00:26.018 ********** 2026-03-29 00:51:06.763200 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:51:06.763205 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:51:06.763211 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.763225 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:51:06.763238 | orchestrator | skipping: 
[testbed-node-1] 2026-03-29 00:51:06.763244 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.763249 | orchestrator | 2026-03-29 00:51:06.763255 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-03-29 00:51:06.763260 | orchestrator | 2026-03-29 00:51:06.763266 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-03-29 00:51:06.763271 | orchestrator | Sunday 29 March 2026 00:47:06 +0000 (0:00:01.203) 0:00:27.222 ********** 2026-03-29 00:51:06.763276 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:06.763291 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:06.763296 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:06.763302 | orchestrator | 2026-03-29 00:51:06.763314 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-03-29 00:51:06.763323 | orchestrator | Sunday 29 March 2026 00:47:07 +0000 (0:00:01.329) 0:00:28.551 ********** 2026-03-29 00:51:06.763332 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:06.763341 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:06.763350 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:06.763359 | orchestrator | 2026-03-29 00:51:06.763368 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-03-29 00:51:06.763375 | orchestrator | Sunday 29 March 2026 00:47:09 +0000 (0:00:01.274) 0:00:29.825 ********** 2026-03-29 00:51:06.763384 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:06.763392 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:06.763407 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:06.763416 | orchestrator | 2026-03-29 00:51:06.763425 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-03-29 00:51:06.763434 | orchestrator | Sunday 29 March 2026 00:47:10 +0000 (0:00:01.368) 0:00:31.194 ********** 
2026-03-29 00:51:06.763444 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:06.763453 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:06.763462 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:06.763471 | orchestrator | 2026-03-29 00:51:06.763480 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-03-29 00:51:06.763489 | orchestrator | Sunday 29 March 2026 00:47:11 +0000 (0:00:01.392) 0:00:32.586 ********** 2026-03-29 00:51:06.763497 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.763502 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.763508 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.763513 | orchestrator | 2026-03-29 00:51:06.763518 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-03-29 00:51:06.763524 | orchestrator | Sunday 29 March 2026 00:47:12 +0000 (0:00:00.587) 0:00:33.174 ********** 2026-03-29 00:51:06.763529 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:06.763535 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:06.763540 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:06.763545 | orchestrator | 2026-03-29 00:51:06.763551 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-03-29 00:51:06.763556 | orchestrator | Sunday 29 March 2026 00:47:13 +0000 (0:00:00.870) 0:00:34.045 ********** 2026-03-29 00:51:06.763561 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:06.763567 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:06.763572 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:06.763578 | orchestrator | 2026-03-29 00:51:06.763583 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-03-29 00:51:06.763589 | orchestrator | Sunday 29 March 2026 00:47:15 +0000 (0:00:02.112) 0:00:36.157 ********** 2026-03-29 00:51:06.763594 
| orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:51:06.763599 | orchestrator | 2026-03-29 00:51:06.763605 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-03-29 00:51:06.763610 | orchestrator | Sunday 29 March 2026 00:47:16 +0000 (0:00:01.094) 0:00:37.251 ********** 2026-03-29 00:51:06.763616 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:06.763621 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:06.763626 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:06.763632 | orchestrator | 2026-03-29 00:51:06.763642 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-29 00:51:06.763647 | orchestrator | Sunday 29 March 2026 00:47:18 +0000 (0:00:02.360) 0:00:39.612 ********** 2026-03-29 00:51:06.763653 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:06.763659 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.763664 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.763670 | orchestrator | 2026-03-29 00:51:06.763675 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-29 00:51:06.763681 | orchestrator | Sunday 29 March 2026 00:47:20 +0000 (0:00:01.302) 0:00:40.914 ********** 2026-03-29 00:51:06.763686 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.763692 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.763697 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:06.763703 | orchestrator | 2026-03-29 00:51:06.763708 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-29 00:51:06.763714 | orchestrator | Sunday 29 March 2026 00:47:21 +0000 (0:00:01.022) 0:00:41.936 ********** 2026-03-29 00:51:06.763719 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.763725 | orchestrator | 
skipping: [testbed-node-2] 2026-03-29 00:51:06.763730 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:06.763736 | orchestrator | 2026-03-29 00:51:06.763741 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-29 00:51:06.763758 | orchestrator | Sunday 29 March 2026 00:47:22 +0000 (0:00:01.425) 0:00:43.362 ********** 2026-03-29 00:51:06.763764 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.763770 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.763775 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.763780 | orchestrator | 2026-03-29 00:51:06.763786 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-29 00:51:06.763791 | orchestrator | Sunday 29 March 2026 00:47:23 +0000 (0:00:00.531) 0:00:43.893 ********** 2026-03-29 00:51:06.763797 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.763803 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.763808 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.763813 | orchestrator | 2026-03-29 00:51:06.763819 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-29 00:51:06.763825 | orchestrator | Sunday 29 March 2026 00:47:23 +0000 (0:00:00.503) 0:00:44.397 ********** 2026-03-29 00:51:06.763830 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:06.763835 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:06.763841 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:06.763846 | orchestrator | 2026-03-29 00:51:06.763851 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-03-29 00:51:06.763857 | orchestrator | Sunday 29 March 2026 00:47:25 +0000 (0:00:02.167) 0:00:46.564 ********** 2026-03-29 00:51:06.763862 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:06.763868 | orchestrator | ok: 
[testbed-node-1] 2026-03-29 00:51:06.763873 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:06.763879 | orchestrator | 2026-03-29 00:51:06.763884 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-29 00:51:06.763889 | orchestrator | Sunday 29 March 2026 00:47:28 +0000 (0:00:02.591) 0:00:49.156 ********** 2026-03-29 00:51:06.763895 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:06.763900 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:06.763906 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:06.763911 | orchestrator | 2026-03-29 00:51:06.763916 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-29 00:51:06.763922 | orchestrator | Sunday 29 March 2026 00:47:28 +0000 (0:00:00.502) 0:00:49.658 ********** 2026-03-29 00:51:06.763928 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-29 00:51:06.763934 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-29 00:51:06.763940 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-29 00:51:06.763945 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-29 00:51:06.763951 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-29 00:51:06.763956 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-03-29 00:51:06.763962 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-29 00:51:06.763967 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-29 00:51:06.763972 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-29 00:51:06.764009 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-29 00:51:06.764021 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-29 00:51:06.764026 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
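The repeated FAILED - RETRYING entries above are expected behavior, not an error: the k3s_server role polls inside an `until`/`retries` loop while etcd elects a leader and the remaining servers join. A minimal sketch of such a task follows (the command, the `groups['master']` inventory group, and the retry/delay values are illustrative assumptions, not the role's exact source):

```yaml
- name: Verify that all nodes actually joined
  ansible.builtin.command:
    cmd: k3s kubectl get nodes -o name
  register: nodes
  changed_when: false
  # Succeed once every expected master reports a node object;
  # 20 retries matches the "(20 retries left)" countdown in the log.
  until: >-
    nodes.rc == 0 and
    (nodes.stdout_lines | length) == (groups['master'] | length)
  retries: 20
  delay: 10
```

In the run above the loop succeeded after three failed polls (~43 s total, per the 0:00:43.314 task duration), after which all three servers returned ok.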
2026-03-29 00:51:06.764042 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:06.764048 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:06.764054 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:06.764059 | orchestrator | 2026-03-29 00:51:06.764065 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-03-29 00:51:06.764070 | orchestrator | Sunday 29 March 2026 00:48:12 +0000 (0:00:43.314) 0:01:32.972 ********** 2026-03-29 00:51:06.764075 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.764081 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.764086 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.764092 | orchestrator | 2026-03-29 00:51:06.764097 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-03-29 00:51:06.764102 | orchestrator | Sunday 29 March 2026 00:48:12 +0000 (0:00:00.594) 0:01:33.567 ********** 2026-03-29 00:51:06.764108 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:06.764113 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:06.764118 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:06.764124 | orchestrator | 2026-03-29 00:51:06.764129 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-03-29 00:51:06.764135 | orchestrator | Sunday 29 March 2026 00:48:13 +0000 (0:00:01.125) 0:01:34.692 ********** 2026-03-29 00:51:06.764140 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:06.764146 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:06.764152 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:06.764157 | orchestrator | 2026-03-29 00:51:06.764166 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-03-29 00:51:06.764172 | orchestrator | Sunday 29 March 2026 00:48:15 +0000 (0:00:01.485) 0:01:36.178 ********** 2026-03-29 00:51:06.764177 
| orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:06.764183 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:06.764188 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:06.764194 | orchestrator | 2026-03-29 00:51:06.764199 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-03-29 00:51:06.764205 | orchestrator | Sunday 29 March 2026 00:48:43 +0000 (0:00:28.239) 0:02:04.417 ********** 2026-03-29 00:51:06.764210 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:06.764216 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:06.764221 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:06.764226 | orchestrator | 2026-03-29 00:51:06.764232 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-03-29 00:51:06.764237 | orchestrator | Sunday 29 March 2026 00:48:44 +0000 (0:00:00.571) 0:02:04.989 ********** 2026-03-29 00:51:06.764243 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:06.764248 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:06.764254 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:06.764259 | orchestrator | 2026-03-29 00:51:06.764265 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-03-29 00:51:06.764270 | orchestrator | Sunday 29 March 2026 00:48:44 +0000 (0:00:00.715) 0:02:05.704 ********** 2026-03-29 00:51:06.764275 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:06.764281 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:06.764287 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:06.764292 | orchestrator | 2026-03-29 00:51:06.764297 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-03-29 00:51:06.764303 | orchestrator | Sunday 29 March 2026 00:48:45 +0000 (0:00:00.556) 0:02:06.261 ********** 2026-03-29 00:51:06.764312 | orchestrator | ok: [testbed-node-1] 
2026-03-29 00:51:06.764321 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:06.764332 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:06.764355 | orchestrator | 2026-03-29 00:51:06.764363 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-03-29 00:51:06.764373 | orchestrator | Sunday 29 March 2026 00:48:46 +0000 (0:00:00.572) 0:02:06.833 ********** 2026-03-29 00:51:06.764381 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:06.764390 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:06.764398 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:06.764406 | orchestrator | 2026-03-29 00:51:06.764414 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-03-29 00:51:06.764424 | orchestrator | Sunday 29 March 2026 00:48:46 +0000 (0:00:00.270) 0:02:07.104 ********** 2026-03-29 00:51:06.764434 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:06.764443 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:06.764452 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:06.764462 | orchestrator | 2026-03-29 00:51:06.764471 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-03-29 00:51:06.764480 | orchestrator | Sunday 29 March 2026 00:48:47 +0000 (0:00:00.773) 0:02:07.877 ********** 2026-03-29 00:51:06.764489 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:06.764498 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:06.764507 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:06.764516 | orchestrator | 2026-03-29 00:51:06.764526 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-29 00:51:06.764536 | orchestrator | Sunday 29 March 2026 00:48:47 +0000 (0:00:00.648) 0:02:08.525 ********** 2026-03-29 00:51:06.764544 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:06.764552 | 
orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:06.764558 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:06.764563 | orchestrator | 2026-03-29 00:51:06.764568 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-29 00:51:06.764574 | orchestrator | Sunday 29 March 2026 00:48:48 +0000 (0:00:00.854) 0:02:09.380 ********** 2026-03-29 00:51:06.764579 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:06.764585 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:06.764590 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:06.764596 | orchestrator | 2026-03-29 00:51:06.764601 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-29 00:51:06.764606 | orchestrator | Sunday 29 March 2026 00:48:49 +0000 (0:00:00.850) 0:02:10.231 ********** 2026-03-29 00:51:06.764612 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.764617 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.764623 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.764628 | orchestrator | 2026-03-29 00:51:06.764634 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-29 00:51:06.764639 | orchestrator | Sunday 29 March 2026 00:48:49 +0000 (0:00:00.411) 0:02:10.642 ********** 2026-03-29 00:51:06.764644 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.764650 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.764655 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.764660 | orchestrator | 2026-03-29 00:51:06.764666 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-03-29 00:51:06.764671 | orchestrator | Sunday 29 March 2026 00:48:50 +0000 (0:00:00.264) 0:02:10.907 ********** 2026-03-29 00:51:06.764677 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:06.764682 | orchestrator | 
ok: [testbed-node-0] 2026-03-29 00:51:06.764688 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:06.764693 | orchestrator | 2026-03-29 00:51:06.764699 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-03-29 00:51:06.764704 | orchestrator | Sunday 29 March 2026 00:48:50 +0000 (0:00:00.775) 0:02:11.682 ********** 2026-03-29 00:51:06.764709 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:06.764715 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:06.764720 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:06.764726 | orchestrator | 2026-03-29 00:51:06.764731 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-03-29 00:51:06.764742 | orchestrator | Sunday 29 March 2026 00:48:51 +0000 (0:00:00.677) 0:02:12.359 ********** 2026-03-29 00:51:06.764748 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-29 00:51:06.764759 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-29 00:51:06.764765 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-29 00:51:06.764770 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-29 00:51:06.764776 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-29 00:51:06.764781 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-29 00:51:06.764787 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-29 00:51:06.764792 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-29 
00:51:06.764798 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-29 00:51:06.764803 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-29 00:51:06.764809 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-03-29 00:51:06.764814 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-29 00:51:06.764820 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-29 00:51:06.764825 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-03-29 00:51:06.764831 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-29 00:51:06.765552 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-29 00:51:06.765677 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-29 00:51:06.765704 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-29 00:51:06.765728 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-29 00:51:06.765819 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-29 00:51:06.765842 | orchestrator | 2026-03-29 00:51:06.765863 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-03-29 00:51:06.765885 | orchestrator | 2026-03-29 00:51:06.765908 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-03-29 00:51:06.765929 | orchestrator | Sunday 29 March 2026 00:48:55 +0000 (0:00:04.173) 
0:02:16.533 ********** 2026-03-29 00:51:06.765948 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:51:06.765974 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:51:06.766009 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:51:06.766123 | orchestrator | 2026-03-29 00:51:06.766145 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-03-29 00:51:06.766167 | orchestrator | Sunday 29 March 2026 00:48:56 +0000 (0:00:00.305) 0:02:16.839 ********** 2026-03-29 00:51:06.766188 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:51:06.766212 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:51:06.766233 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:51:06.766255 | orchestrator | 2026-03-29 00:51:06.766294 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-03-29 00:51:06.766316 | orchestrator | Sunday 29 March 2026 00:48:56 +0000 (0:00:00.723) 0:02:17.563 ********** 2026-03-29 00:51:06.766341 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:51:06.766385 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:51:06.766408 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:51:06.766491 | orchestrator | 2026-03-29 00:51:06.766516 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-03-29 00:51:06.766540 | orchestrator | Sunday 29 March 2026 00:48:57 +0000 (0:00:00.502) 0:02:18.066 ********** 2026-03-29 00:51:06.766573 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:51:06.766595 | orchestrator | 2026-03-29 00:51:06.766615 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-03-29 00:51:06.766638 | orchestrator | Sunday 29 March 2026 00:48:57 +0000 (0:00:00.452) 0:02:18.518 ********** 2026-03-29 00:51:06.766648 | orchestrator | skipping: [testbed-node-3] 2026-03-29 
00:51:06.766657 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:51:06.766664 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:51:06.766672 | orchestrator | 2026-03-29 00:51:06.766679 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-29 00:51:06.766687 | orchestrator | Sunday 29 March 2026 00:48:58 +0000 (0:00:00.262) 0:02:18.781 ********** 2026-03-29 00:51:06.766696 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:51:06.766705 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:51:06.766712 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:51:06.766719 | orchestrator | 2026-03-29 00:51:06.766727 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-29 00:51:06.766735 | orchestrator | Sunday 29 March 2026 00:48:58 +0000 (0:00:00.460) 0:02:19.242 ********** 2026-03-29 00:51:06.766745 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:51:06.766753 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:51:06.766762 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:51:06.766770 | orchestrator | 2026-03-29 00:51:06.766778 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-29 00:51:06.766786 | orchestrator | Sunday 29 March 2026 00:48:58 +0000 (0:00:00.316) 0:02:19.559 ********** 2026-03-29 00:51:06.766794 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:51:06.766802 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:51:06.766811 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:51:06.766820 | orchestrator | 2026-03-29 00:51:06.766842 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-29 00:51:06.766847 | orchestrator | Sunday 29 March 2026 00:48:59 +0000 (0:00:00.690) 0:02:20.249 ********** 2026-03-29 00:51:06.766853 | orchestrator | changed: [testbed-node-3] 2026-03-29 
00:51:06.766858 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:51:06.766863 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:51:06.766868 | orchestrator | 2026-03-29 00:51:06.766873 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-29 00:51:06.766878 | orchestrator | Sunday 29 March 2026 00:49:00 +0000 (0:00:01.175) 0:02:21.425 ********** 2026-03-29 00:51:06.766883 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:51:06.766889 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:51:06.766894 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:51:06.766899 | orchestrator | 2026-03-29 00:51:06.766904 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-29 00:51:06.766910 | orchestrator | Sunday 29 March 2026 00:49:02 +0000 (0:00:01.670) 0:02:23.096 ********** 2026-03-29 00:51:06.766915 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:51:06.766920 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:51:06.766925 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:51:06.766930 | orchestrator | 2026-03-29 00:51:06.766935 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-29 00:51:06.766941 | orchestrator | 2026-03-29 00:51:06.766945 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-29 00:51:06.766951 | orchestrator | Sunday 29 March 2026 00:49:12 +0000 (0:00:10.037) 0:02:33.133 ********** 2026-03-29 00:51:06.766956 | orchestrator | ok: [testbed-manager] 2026-03-29 00:51:06.766969 | orchestrator | 2026-03-29 00:51:06.766975 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-29 00:51:06.767034 | orchestrator | Sunday 29 March 2026 00:49:13 +0000 (0:00:00.684) 0:02:33.817 ********** 2026-03-29 00:51:06.767041 | orchestrator | changed: [testbed-manager] 
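The kubeconfig play that starts here fetches the admin kubeconfig from the first server and rewrites its API server address to the cluster VIP, matching the "Get kubeconfig file", "Write kubeconfig file", and "Change server address in the kubeconfig" tasks below. A minimal sketch under assumed paths (k3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml with server https://127.0.0.1:6443; the VIP 192.168.16.8 is taken from the "Configure kubectl cluster" task earlier in the log):

```yaml
- name: Get kubeconfig file
  ansible.builtin.slurp:
    src: /etc/rancher/k3s/k3s.yaml
  delegate_to: "{{ groups['master'] | first }}"
  register: kubeconfig

- name: Write kubeconfig file
  ansible.builtin.copy:
    content: "{{ kubeconfig.content | b64decode }}"
    dest: "{{ operator_home }}/.kube/config"  # operator_home: assumed variable
    mode: "0600"

- name: Change server address in the kubeconfig
  ansible.builtin.replace:
    path: "{{ operator_home }}/.kube/config"
    regexp: 'https://127\.0\.0\.1:6443'
    replace: 'https://192.168.16.8:6443'
```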
2026-03-29 00:51:06.767047 | orchestrator | 2026-03-29 00:51:06.767052 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-29 00:51:06.767058 | orchestrator | Sunday 29 March 2026 00:49:13 +0000 (0:00:00.373) 0:02:34.191 ********** 2026-03-29 00:51:06.767063 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-29 00:51:06.767069 | orchestrator | 2026-03-29 00:51:06.767074 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-29 00:51:06.767079 | orchestrator | Sunday 29 March 2026 00:49:14 +0000 (0:00:00.559) 0:02:34.751 ********** 2026-03-29 00:51:06.767086 | orchestrator | changed: [testbed-manager] 2026-03-29 00:51:06.767095 | orchestrator | 2026-03-29 00:51:06.767102 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-29 00:51:06.767110 | orchestrator | Sunday 29 March 2026 00:49:14 +0000 (0:00:00.915) 0:02:35.666 ********** 2026-03-29 00:51:06.767117 | orchestrator | changed: [testbed-manager] 2026-03-29 00:51:06.767125 | orchestrator | 2026-03-29 00:51:06.767133 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-29 00:51:06.767141 | orchestrator | Sunday 29 March 2026 00:49:15 +0000 (0:00:00.511) 0:02:36.177 ********** 2026-03-29 00:51:06.767149 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-29 00:51:06.767157 | orchestrator | 2026-03-29 00:51:06.767164 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-29 00:51:06.767172 | orchestrator | Sunday 29 March 2026 00:49:17 +0000 (0:00:01.672) 0:02:37.849 ********** 2026-03-29 00:51:06.767181 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-29 00:51:06.767189 | orchestrator | 2026-03-29 00:51:06.767199 | orchestrator | TASK [Set KUBECONFIG environment variable] 
************************************* 2026-03-29 00:51:06.767208 | orchestrator | Sunday 29 March 2026 00:49:18 +0000 (0:00:00.996) 0:02:38.846 ********** 2026-03-29 00:51:06.767216 | orchestrator | changed: [testbed-manager] 2026-03-29 00:51:06.767225 | orchestrator | 2026-03-29 00:51:06.767233 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-29 00:51:06.767242 | orchestrator | Sunday 29 March 2026 00:49:18 +0000 (0:00:00.536) 0:02:39.383 ********** 2026-03-29 00:51:06.767250 | orchestrator | changed: [testbed-manager] 2026-03-29 00:51:06.767259 | orchestrator | 2026-03-29 00:51:06.767268 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-03-29 00:51:06.767273 | orchestrator | 2026-03-29 00:51:06.767285 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-03-29 00:51:06.767290 | orchestrator | Sunday 29 March 2026 00:49:19 +0000 (0:00:00.547) 0:02:39.930 ********** 2026-03-29 00:51:06.767295 | orchestrator | ok: [testbed-manager] 2026-03-29 00:51:06.767301 | orchestrator | 2026-03-29 00:51:06.767306 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-03-29 00:51:06.767311 | orchestrator | Sunday 29 March 2026 00:49:19 +0000 (0:00:00.125) 0:02:40.055 ********** 2026-03-29 00:51:06.767316 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-03-29 00:51:06.767322 | orchestrator | 2026-03-29 00:51:06.767327 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-03-29 00:51:06.767332 | orchestrator | Sunday 29 March 2026 00:49:19 +0000 (0:00:00.204) 0:02:40.259 ********** 2026-03-29 00:51:06.767337 | orchestrator | ok: [testbed-manager] 2026-03-29 00:51:06.767342 | orchestrator | 2026-03-29 00:51:06.767347 | orchestrator | TASK [kubectl : Install 
apt-transport-https package] *************************** 2026-03-29 00:51:06.767352 | orchestrator | Sunday 29 March 2026 00:49:20 +0000 (0:00:00.999) 0:02:41.259 ********** 2026-03-29 00:51:06.767357 | orchestrator | ok: [testbed-manager] 2026-03-29 00:51:06.767362 | orchestrator | 2026-03-29 00:51:06.767374 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-03-29 00:51:06.767379 | orchestrator | Sunday 29 March 2026 00:49:21 +0000 (0:00:01.379) 0:02:42.639 ********** 2026-03-29 00:51:06.767384 | orchestrator | changed: [testbed-manager] 2026-03-29 00:51:06.767389 | orchestrator | 2026-03-29 00:51:06.767394 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-03-29 00:51:06.767399 | orchestrator | Sunday 29 March 2026 00:49:22 +0000 (0:00:00.795) 0:02:43.434 ********** 2026-03-29 00:51:06.767405 | orchestrator | ok: [testbed-manager] 2026-03-29 00:51:06.767410 | orchestrator | 2026-03-29 00:51:06.767422 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-03-29 00:51:06.767427 | orchestrator | Sunday 29 March 2026 00:49:23 +0000 (0:00:00.625) 0:02:44.060 ********** 2026-03-29 00:51:06.767432 | orchestrator | changed: [testbed-manager] 2026-03-29 00:51:06.767437 | orchestrator | 2026-03-29 00:51:06.767443 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-03-29 00:51:06.767448 | orchestrator | Sunday 29 March 2026 00:49:30 +0000 (0:00:07.646) 0:02:51.707 ********** 2026-03-29 00:51:06.767453 | orchestrator | changed: [testbed-manager] 2026-03-29 00:51:06.767458 | orchestrator | 2026-03-29 00:51:06.767463 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-03-29 00:51:06.767468 | orchestrator | Sunday 29 March 2026 00:49:44 +0000 (0:00:13.153) 0:03:04.861 ********** 2026-03-29 00:51:06.767473 | orchestrator | ok: 
[testbed-manager] 2026-03-29 00:51:06.767478 | orchestrator | 2026-03-29 00:51:06.767483 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-03-29 00:51:06.767488 | orchestrator | 2026-03-29 00:51:06.767493 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-03-29 00:51:06.767498 | orchestrator | Sunday 29 March 2026 00:49:44 +0000 (0:00:00.555) 0:03:05.416 ********** 2026-03-29 00:51:06.767503 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:06.767508 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:06.767514 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:06.767519 | orchestrator | 2026-03-29 00:51:06.767524 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-03-29 00:51:06.767529 | orchestrator | Sunday 29 March 2026 00:49:45 +0000 (0:00:00.452) 0:03:05.869 ********** 2026-03-29 00:51:06.767534 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.767539 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.767544 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.767549 | orchestrator | 2026-03-29 00:51:06.767554 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-03-29 00:51:06.767559 | orchestrator | Sunday 29 March 2026 00:49:45 +0000 (0:00:00.336) 0:03:06.205 ********** 2026-03-29 00:51:06.767565 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:51:06.767570 | orchestrator | 2026-03-29 00:51:06.767575 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-03-29 00:51:06.767580 | orchestrator | Sunday 29 March 2026 00:49:46 +0000 (0:00:00.547) 0:03:06.753 ********** 2026-03-29 00:51:06.767585 | orchestrator | changed: [testbed-node-0 -> localhost] 
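[Editor's note: the "Change server address in the kubeconfig" tasks above suggest the play copies the kubeconfig from the first master and rewrites its API server endpoint. A minimal local sketch of that rewrite, assuming a sed-style substitution and using the 192.168.16.10 node address visible in this log (the exact address the play targets is an assumption):]

```shell
# Sketch only: rewrite the API server address in a copied kubeconfig.
# k3s writes 127.0.0.1 into /etc/rancher/k3s/k3s.yaml; a copy made for
# remote use must point at a reachable address instead.
kubeconfig="$(mktemp)"
cat > "$kubeconfig" <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# Replace the node-local loopback endpoint with the cluster address
# (192.168.16.10 is taken from the inventory in this log; hypothetical here).
sed -i 's|server: https://127.0.0.1:6443|server: https://192.168.16.10:6443|' "$kubeconfig"
grep 'server:' "$kubeconfig"
```

[After the rewrite, `kubectl --kubeconfig "$kubeconfig" ...` would reach the cluster from the manager host rather than only from the node itself.]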
2026-03-29 00:51:06.767591 | orchestrator | 2026-03-29 00:51:06.767596 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-03-29 00:51:06.767601 | orchestrator | Sunday 29 March 2026 00:49:46 +0000 (0:00:00.874) 0:03:07.628 ********** 2026-03-29 00:51:06.767606 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 00:51:06.767611 | orchestrator | 2026-03-29 00:51:06.767616 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-29 00:51:06.767621 | orchestrator | Sunday 29 March 2026 00:49:47 +0000 (0:00:00.733) 0:03:08.361 ********** 2026-03-29 00:51:06.767626 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.767631 | orchestrator | 2026-03-29 00:51:06.767636 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-29 00:51:06.767647 | orchestrator | Sunday 29 March 2026 00:49:47 +0000 (0:00:00.275) 0:03:08.636 ********** 2026-03-29 00:51:06.767652 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 00:51:06.767657 | orchestrator | 2026-03-29 00:51:06.767662 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-29 00:51:06.767667 | orchestrator | Sunday 29 March 2026 00:49:48 +0000 (0:00:00.905) 0:03:09.541 ********** 2026-03-29 00:51:06.767672 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.767677 | orchestrator | 2026-03-29 00:51:06.767682 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-29 00:51:06.767688 | orchestrator | Sunday 29 March 2026 00:49:48 +0000 (0:00:00.099) 0:03:09.641 ********** 2026-03-29 00:51:06.767693 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.767697 | orchestrator | 2026-03-29 00:51:06.767702 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-29 00:51:06.767707 | 
orchestrator | Sunday 29 March 2026 00:49:49 +0000 (0:00:00.096) 0:03:09.737 ********** 2026-03-29 00:51:06.767716 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.767721 | orchestrator | 2026-03-29 00:51:06.767726 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-29 00:51:06.767731 | orchestrator | Sunday 29 March 2026 00:49:49 +0000 (0:00:00.116) 0:03:09.854 ********** 2026-03-29 00:51:06.767736 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.767741 | orchestrator | 2026-03-29 00:51:06.767746 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-29 00:51:06.767751 | orchestrator | Sunday 29 March 2026 00:49:49 +0000 (0:00:00.088) 0:03:09.943 ********** 2026-03-29 00:51:06.767757 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-29 00:51:06.767762 | orchestrator | 2026-03-29 00:51:06.767767 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-29 00:51:06.767772 | orchestrator | Sunday 29 March 2026 00:49:53 +0000 (0:00:03.916) 0:03:13.859 ********** 2026-03-29 00:51:06.767777 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-29 00:51:06.767782 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
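[Editor's note: the "FAILED - RETRYING: ... (30 retries left)" line above is Ansible's standard retries/delay/until loop waiting for the Cilium resources to become ready. A standalone sketch of the same wait-until-ready pattern, with a stub check in place of the real readiness probe:]

```shell
# Sketch of the retry-until-ready pattern behind "FAILED - RETRYING".
# check() is a stub that "succeeds" on the third attempt; in the real play
# the probe would be something like a kubectl rollout/status query.
attempts=0
check() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # pretend the resource becomes ready on try 3
}

retries=30
until check; do
  retries=$((retries - 1))
  if [ "$retries" -le 0 ]; then
    echo "resource never became ready" >&2
    exit 1
  fi
  sleep 0.1   # the play waits between checks; shortened here
done
echo "ready after $attempts attempt(s)"   # prints: ready after 3 attempt(s)
```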
2026-03-29 00:51:06.767789 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-29 00:51:06.767794 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-29 00:51:06.767799 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-29 00:51:06.767816 | orchestrator | 2026-03-29 00:51:06.767827 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-29 00:51:06.767833 | orchestrator | Sunday 29 March 2026 00:50:36 +0000 (0:00:42.908) 0:03:56.768 ********** 2026-03-29 00:51:06.767842 | orchestrator | 2026-03-29 00:51:06 | INFO  | Task d22b2db3-6420-47af-b5af-887cbf3817b9 is in state SUCCESS 2026-03-29 00:51:06.767849 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 00:51:06.767854 | orchestrator | 2026-03-29 00:51:06.767859 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-29 00:51:06.767864 | orchestrator | Sunday 29 March 2026 00:50:37 +0000 (0:00:01.395) 0:03:58.164 ********** 2026-03-29 00:51:06.767869 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-29 00:51:06.767875 | orchestrator | 2026-03-29 00:51:06.767880 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-29 00:51:06.767885 | orchestrator | Sunday 29 March 2026 00:50:39 +0000 (0:00:01.857) 0:04:00.021 ********** 2026-03-29 00:51:06.767890 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-29 00:51:06.767895 | orchestrator | 2026-03-29 00:51:06.767900 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-29 00:51:06.767905 | orchestrator | Sunday 29 March 2026 00:50:40 +0000 (0:00:00.112) 0:04:01.283 ********** 2026-03-29 00:51:06.767911 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.767916 | orchestrator | 2026-03-29 00:51:06.767921 | 
orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-29 00:51:06.767931 | orchestrator | Sunday 29 March 2026 00:50:40 +0000 (0:00:00.112) 0:04:01.395 ********** 2026-03-29 00:51:06.767936 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-29 00:51:06.767941 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-29 00:51:06.767946 | orchestrator | 2026-03-29 00:51:06.767951 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-29 00:51:06.767956 | orchestrator | Sunday 29 March 2026 00:50:42 +0000 (0:00:01.872) 0:04:03.268 ********** 2026-03-29 00:51:06.767961 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.767967 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.767972 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.767977 | orchestrator | 2026-03-29 00:51:06.768005 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-29 00:51:06.768010 | orchestrator | Sunday 29 March 2026 00:50:42 +0000 (0:00:00.263) 0:04:03.531 ********** 2026-03-29 00:51:06.768015 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:06.768021 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:06.768026 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:06.768031 | orchestrator | 2026-03-29 00:51:06.768036 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-29 00:51:06.768041 | orchestrator | 2026-03-29 00:51:06.768046 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-29 00:51:06.768051 | orchestrator | Sunday 29 March 2026 00:50:43 +0000 (0:00:00.854) 0:04:04.386 ********** 2026-03-29 00:51:06.768056 | orchestrator | ok: [testbed-manager] 2026-03-29 
00:51:06.768061 | orchestrator | 2026-03-29 00:51:06.768066 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-03-29 00:51:06.768071 | orchestrator | Sunday 29 March 2026 00:50:43 +0000 (0:00:00.138) 0:04:04.525 ********** 2026-03-29 00:51:06.768077 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-29 00:51:06.768082 | orchestrator | 2026-03-29 00:51:06.768087 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-29 00:51:06.768092 | orchestrator | Sunday 29 March 2026 00:50:44 +0000 (0:00:00.366) 0:04:04.892 ********** 2026-03-29 00:51:06.768097 | orchestrator | changed: [testbed-manager] 2026-03-29 00:51:06.768102 | orchestrator | 2026-03-29 00:51:06.768107 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-29 00:51:06.768112 | orchestrator | 2026-03-29 00:51:06.768117 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-29 00:51:06.768122 | orchestrator | Sunday 29 March 2026 00:50:49 +0000 (0:00:05.675) 0:04:10.568 ********** 2026-03-29 00:51:06.768127 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:51:06.768133 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:51:06.768138 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:51:06.768143 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:06.768148 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:06.768153 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:06.768158 | orchestrator | 2026-03-29 00:51:06.768166 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-29 00:51:06.768172 | orchestrator | Sunday 29 March 2026 00:50:50 +0000 (0:00:00.589) 0:04:11.157 ********** 2026-03-29 00:51:06.768177 | orchestrator | ok: [testbed-node-3 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-03-29 00:51:06.768182 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-29 00:51:06.768187 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-29 00:51:06.768192 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-29 00:51:06.768197 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-29 00:51:06.768209 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-29 00:51:06.768214 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-29 00:51:06.768219 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-29 00:51:06.768224 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-29 00:51:06.768229 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-29 00:51:06.768234 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-29 00:51:06.768243 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-29 00:51:06.768248 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-29 00:51:06.768254 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-29 00:51:06.768259 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-29 00:51:06.768264 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-29 00:51:06.768269 | orchestrator | ok: [testbed-node-0 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-03-29 00:51:06.768274 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-29 00:51:06.768279 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-29 00:51:06.768284 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-29 00:51:06.768289 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-29 00:51:06.768294 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-29 00:51:06.768299 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-29 00:51:06.768304 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-29 00:51:06.768309 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-29 00:51:06.768314 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-29 00:51:06.768319 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-29 00:51:06.768324 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-29 00:51:06.768330 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-29 00:51:06.768335 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-29 00:51:06.768340 | orchestrator | 2026-03-29 00:51:06.768345 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-29 00:51:06.768350 | orchestrator | Sunday 29 March 2026 00:51:04 +0000 (0:00:14.302) 0:04:25.460 ********** 2026-03-29 00:51:06.768355 | orchestrator | skipping: 
[testbed-node-3] 2026-03-29 00:51:06.768360 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:51:06.768365 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:51:06.768370 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.768375 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.768380 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.768385 | orchestrator | 2026-03-29 00:51:06.768391 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-29 00:51:06.768396 | orchestrator | Sunday 29 March 2026 00:51:05 +0000 (0:00:00.441) 0:04:25.902 ********** 2026-03-29 00:51:06.768401 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:51:06.768406 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:51:06.768411 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:51:06.768420 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:06.768425 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:06.768430 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:06.768435 | orchestrator | 2026-03-29 00:51:06.768440 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:51:06.768446 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:51:06.768452 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-29 00:51:06.768465 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-29 00:51:06.768470 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-29 00:51:06.768475 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-29 00:51:06.768480 | orchestrator | testbed-node-4 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-29 00:51:06.768485 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-29 00:51:06.768491 | orchestrator | 2026-03-29 00:51:06.768496 | orchestrator | 2026-03-29 00:51:06.768501 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:51:06.768506 | orchestrator | Sunday 29 March 2026 00:51:05 +0000 (0:00:00.499) 0:04:26.401 ********** 2026-03-29 00:51:06.768511 | orchestrator | =============================================================================== 2026-03-29 00:51:06.768516 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.31s 2026-03-29 00:51:06.768522 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.91s 2026-03-29 00:51:06.768531 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 28.24s 2026-03-29 00:51:06.768536 | orchestrator | Manage labels ---------------------------------------------------------- 14.30s 2026-03-29 00:51:06.768541 | orchestrator | kubectl : Install required packages ------------------------------------ 13.15s 2026-03-29 00:51:06.768546 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.04s 2026-03-29 00:51:06.768552 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.65s 2026-03-29 00:51:06.768557 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.68s 2026-03-29 00:51:06.768562 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.56s 2026-03-29 00:51:06.768567 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.17s 2026-03-29 00:51:06.768572 | 
orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 3.92s 2026-03-29 00:51:06.768577 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.59s 2026-03-29 00:51:06.768582 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.59s 2026-03-29 00:51:06.768587 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.36s 2026-03-29 00:51:06.768592 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.17s 2026-03-29 00:51:06.768597 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.11s 2026-03-29 00:51:06.768602 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.03s 2026-03-29 00:51:06.768607 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.87s 2026-03-29 00:51:06.768616 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.86s 2026-03-29 00:51:06.768622 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.75s 2026-03-29 00:51:06.768627 | orchestrator | 2026-03-29 00:51:06 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:51:06.768632 | orchestrator | 2026-03-29 00:51:06 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:51:06.768637 | orchestrator | 2026-03-29 00:51:06 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:51:06.768642 | orchestrator | 2026-03-29 00:51:06 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:51:06.768647 | orchestrator | 2026-03-29 00:51:06 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:51:09.789969 | orchestrator | 2026-03-29 00:51:09 | INFO  | Task 9bd5a4db-9dc5-4eaf-87d2-35167b4b93e9 is in state 
STARTED 2026-03-29 00:51:09.790271 | orchestrator | 2026-03-29 00:51:09 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:51:09.790789 | orchestrator | 2026-03-29 00:51:09 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:51:09.791372 | orchestrator | 2026-03-29 00:51:09 | INFO  | Task 410b470c-7d6e-41fb-b5ce-e7f559adfb46 is in state STARTED 2026-03-29 00:51:09.792049 | orchestrator | 2026-03-29 00:51:09 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:51:09.793862 | orchestrator | 2026-03-29 00:51:09 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:51:09.793913 | orchestrator | 2026-03-29 00:51:09 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:51:12.842196 | orchestrator | 2026-03-29 00:51:12 | INFO  | Task 9bd5a4db-9dc5-4eaf-87d2-35167b4b93e9 is in state SUCCESS 2026-03-29 00:51:12.842286 | orchestrator | 2026-03-29 00:51:12 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:51:12.844300 | orchestrator | 2026-03-29 00:51:12 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:51:12.845767 | orchestrator | 2026-03-29 00:51:12 | INFO  | Task 410b470c-7d6e-41fb-b5ce-e7f559adfb46 is in state STARTED 2026-03-29 00:51:12.847688 | orchestrator | 2026-03-29 00:51:12 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:51:12.848373 | orchestrator | 2026-03-29 00:51:12 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:51:12.849700 | orchestrator | 2026-03-29 00:51:12 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:51:15.884465 | orchestrator | 2026-03-29 00:51:15 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:51:15.884851 | orchestrator | 2026-03-29 00:51:15 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 
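[Editor's note: the repeating "Task ... is in state STARTED ... Wait 1 second(s) until the next check" lines are the OSISM manager polling its task queue until each task reports SUCCESS. A minimal sketch of such a poll loop over simulated states (the real tool queries its task API; everything here is illustrative):]

```shell
# Simulated task-state polling, mirroring the log output above.
# The task ID is copied from the log; the state sequence is made up.
states="STARTED STARTED STARTED SUCCESS"
for state in $states; do
  echo "Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state $state"
  if [ "$state" = "SUCCESS" ]; then
    break
  fi
  echo "Wait 1 second(s) until the next check"
  # sleep 1   # real poll interval; omitted in this sketch
done
```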
2026-03-29 00:51:15.887612 | orchestrator | 2026-03-29 00:51:15 | INFO  | Task 410b470c-7d6e-41fb-b5ce-e7f559adfb46 is in state STARTED 2026-03-29 00:51:15.887666 | orchestrator | 2026-03-29 00:51:15 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:51:15.887675 | orchestrator | 2026-03-29 00:51:15 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:51:15.887684 | orchestrator | 2026-03-29 00:51:15 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:51:18.924107 | orchestrator | 2026-03-29 00:51:18 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:51:18.924816 | orchestrator | 2026-03-29 00:51:18 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:51:18.925658 | orchestrator | 2026-03-29 00:51:18 | INFO  | Task 410b470c-7d6e-41fb-b5ce-e7f559adfb46 is in state SUCCESS 2026-03-29 00:51:18.928425 | orchestrator | 2026-03-29 00:51:18 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:51:18.929140 | orchestrator | 2026-03-29 00:51:18 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:51:18.929180 | orchestrator | 2026-03-29 00:51:18 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:51:21.961269 | orchestrator | 2026-03-29 00:51:21 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:51:21.962486 | orchestrator | 2026-03-29 00:51:21 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:51:21.963671 | orchestrator | 2026-03-29 00:51:21 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:51:21.965235 | orchestrator | 2026-03-29 00:51:21 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:51:21.965276 | orchestrator | 2026-03-29 00:51:21 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:51:24.988482 | 
orchestrator | 2026-03-29 00:51:24 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:51:24.990343 | orchestrator | 2026-03-29 00:51:24 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:51:24.992826 | orchestrator | 2026-03-29 00:51:24 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:51:24.992876 | orchestrator | 2026-03-29 00:51:24 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:51:24.992881 | orchestrator | 2026-03-29 00:51:24 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:51:28.035815 | orchestrator | 2026-03-29 00:51:28 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:51:28.036330 | orchestrator | 2026-03-29 00:51:28 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state STARTED 2026-03-29 00:51:28.037250 | orchestrator | 2026-03-29 00:51:28 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:51:28.038069 | orchestrator | 2026-03-29 00:51:28 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:51:28.038161 | orchestrator | 2026-03-29 00:51:28 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:51:31.069331 | orchestrator | 2026-03-29 00:51:31 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state STARTED 2026-03-29 00:51:31.070691 | orchestrator | 2026-03-29 00:51:31 | INFO  | Task 502b9470-3814-4e5b-866b-42ac99e0b382 is in state SUCCESS 2026-03-29 00:51:31.071763 | orchestrator | 2026-03-29 00:51:31.071825 | orchestrator | 2026-03-29 00:51:31.071841 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-29 00:51:31.071847 | orchestrator | 2026-03-29 00:51:31.071854 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-29 00:51:31.071860 | orchestrator | Sunday 29 March 
2026 00:51:08 +0000 (0:00:00.222) 0:00:00.222 ********** 2026-03-29 00:51:31.071866 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-29 00:51:31.071872 | orchestrator | 2026-03-29 00:51:31.071878 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-29 00:51:31.071884 | orchestrator | Sunday 29 March 2026 00:51:10 +0000 (0:00:01.156) 0:00:01.378 ********** 2026-03-29 00:51:31.071890 | orchestrator | changed: [testbed-manager] 2026-03-29 00:51:31.071896 | orchestrator | 2026-03-29 00:51:31.071902 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-29 00:51:31.071908 | orchestrator | Sunday 29 March 2026 00:51:11 +0000 (0:00:01.313) 0:00:02.691 ********** 2026-03-29 00:51:31.071929 | orchestrator | changed: [testbed-manager] 2026-03-29 00:51:31.071935 | orchestrator | 2026-03-29 00:51:31.071941 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:51:31.071948 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:51:31.071955 | orchestrator | 2026-03-29 00:51:31.072059 | orchestrator | 2026-03-29 00:51:31.072081 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:51:31.072087 | orchestrator | Sunday 29 March 2026 00:51:12 +0000 (0:00:00.666) 0:00:03.358 ********** 2026-03-29 00:51:31.072091 | orchestrator | =============================================================================== 2026-03-29 00:51:31.072096 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.31s 2026-03-29 00:51:31.072102 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.16s 2026-03-29 00:51:31.072106 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.67s 
2026-03-29 00:51:31.072111 | orchestrator | 2026-03-29 00:51:31.072116 | orchestrator | 2026-03-29 00:51:31.072121 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-29 00:51:31.072125 | orchestrator | 2026-03-29 00:51:31.072130 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-29 00:51:31.072136 | orchestrator | Sunday 29 March 2026 00:51:08 +0000 (0:00:00.216) 0:00:00.216 ********** 2026-03-29 00:51:31.072141 | orchestrator | ok: [testbed-manager] 2026-03-29 00:51:31.072147 | orchestrator | 2026-03-29 00:51:31.072152 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-29 00:51:31.072158 | orchestrator | Sunday 29 March 2026 00:51:09 +0000 (0:00:00.746) 0:00:00.963 ********** 2026-03-29 00:51:31.072163 | orchestrator | ok: [testbed-manager] 2026-03-29 00:51:31.072168 | orchestrator | 2026-03-29 00:51:31.072173 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-29 00:51:31.072178 | orchestrator | Sunday 29 March 2026 00:51:10 +0000 (0:00:00.582) 0:00:01.545 ********** 2026-03-29 00:51:31.072184 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-29 00:51:31.072190 | orchestrator | 2026-03-29 00:51:31.072195 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-29 00:51:31.072201 | orchestrator | Sunday 29 March 2026 00:51:11 +0000 (0:00:01.110) 0:00:02.656 ********** 2026-03-29 00:51:31.072206 | orchestrator | changed: [testbed-manager] 2026-03-29 00:51:31.072212 | orchestrator | 2026-03-29 00:51:31.072217 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-29 00:51:31.072222 | orchestrator | Sunday 29 March 2026 00:51:12 +0000 (0:00:01.169) 0:00:03.825 ********** 2026-03-29 00:51:31.072228 | orchestrator | changed: 
[testbed-manager] 2026-03-29 00:51:31.072233 | orchestrator | 2026-03-29 00:51:31.072239 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-29 00:51:31.072245 | orchestrator | Sunday 29 March 2026 00:51:13 +0000 (0:00:00.458) 0:00:04.284 ********** 2026-03-29 00:51:31.072251 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-29 00:51:31.072257 | orchestrator | 2026-03-29 00:51:31.072263 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-29 00:51:31.072269 | orchestrator | Sunday 29 March 2026 00:51:14 +0000 (0:00:01.545) 0:00:05.830 ********** 2026-03-29 00:51:31.072275 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-29 00:51:31.072280 | orchestrator | 2026-03-29 00:51:31.072286 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-29 00:51:31.072291 | orchestrator | Sunday 29 March 2026 00:51:15 +0000 (0:00:00.776) 0:00:06.606 ********** 2026-03-29 00:51:31.072296 | orchestrator | ok: [testbed-manager] 2026-03-29 00:51:31.072301 | orchestrator | 2026-03-29 00:51:31.072306 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-29 00:51:31.072311 | orchestrator | Sunday 29 March 2026 00:51:15 +0000 (0:00:00.379) 0:00:06.986 ********** 2026-03-29 00:51:31.072334 | orchestrator | ok: [testbed-manager] 2026-03-29 00:51:31.072366 | orchestrator | 2026-03-29 00:51:31.072372 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:51:31.072378 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:51:31.072384 | orchestrator | 2026-03-29 00:51:31.072389 | orchestrator | 2026-03-29 00:51:31.072400 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 
00:51:31.072405 | orchestrator | Sunday 29 March 2026 00:51:15 +0000 (0:00:00.236) 0:00:07.222 ********** 2026-03-29 00:51:31.072410 | orchestrator | =============================================================================== 2026-03-29 00:51:31.072423 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.55s 2026-03-29 00:51:31.072428 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.17s 2026-03-29 00:51:31.072443 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.11s 2026-03-29 00:51:31.072462 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.78s 2026-03-29 00:51:31.072468 | orchestrator | Get home directory of operator user ------------------------------------- 0.75s 2026-03-29 00:51:31.072474 | orchestrator | Create .kube directory -------------------------------------------------- 0.58s 2026-03-29 00:51:31.072479 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.46s 2026-03-29 00:51:31.072484 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.38s 2026-03-29 00:51:31.072489 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.24s 2026-03-29 00:51:31.072494 | orchestrator | 2026-03-29 00:51:31.072499 | orchestrator | 2026-03-29 00:51:31.072505 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-03-29 00:51:31.072510 | orchestrator | 2026-03-29 00:51:31.072515 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-29 00:51:31.072521 | orchestrator | Sunday 29 March 2026 00:49:10 +0000 (0:00:00.119) 0:00:00.121 ********** 2026-03-29 00:51:31.072526 | orchestrator | ok: [localhost] => { 2026-03-29 00:51:31.072532 | orchestrator |  "msg": "The task 'Check RabbitMQ service' 
fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-03-29 00:51:31.072538 | orchestrator | } 2026-03-29 00:51:31.072544 | orchestrator | 2026-03-29 00:51:31.072549 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-03-29 00:51:31.072554 | orchestrator | Sunday 29 March 2026 00:49:10 +0000 (0:00:00.076) 0:00:00.197 ********** 2026-03-29 00:51:31.072561 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-03-29 00:51:31.072567 | orchestrator | ...ignoring 2026-03-29 00:51:31.072595 | orchestrator | 2026-03-29 00:51:31.072601 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-03-29 00:51:31.072607 | orchestrator | Sunday 29 March 2026 00:49:13 +0000 (0:00:03.223) 0:00:03.421 ********** 2026-03-29 00:51:31.072613 | orchestrator | skipping: [localhost] 2026-03-29 00:51:31.072618 | orchestrator | 2026-03-29 00:51:31.072623 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-03-29 00:51:31.072636 | orchestrator | Sunday 29 March 2026 00:49:13 +0000 (0:00:00.042) 0:00:03.463 ********** 2026-03-29 00:51:31.072641 | orchestrator | ok: [localhost] 2026-03-29 00:51:31.072646 | orchestrator | 2026-03-29 00:51:31.072651 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 00:51:31.072656 | orchestrator | 2026-03-29 00:51:31.072661 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 00:51:31.072666 | orchestrator | Sunday 29 March 2026 00:49:13 +0000 (0:00:00.204) 0:00:03.667 ********** 2026-03-29 00:51:31.072679 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:31.072697 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:31.072703 | orchestrator | ok: 
[testbed-node-2] 2026-03-29 00:51:31.072709 | orchestrator | 2026-03-29 00:51:31.072714 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 00:51:31.072719 | orchestrator | Sunday 29 March 2026 00:49:14 +0000 (0:00:00.274) 0:00:03.942 ********** 2026-03-29 00:51:31.072724 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-29 00:51:31.072729 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-29 00:51:31.072734 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-29 00:51:31.072739 | orchestrator | 2026-03-29 00:51:31.072745 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-29 00:51:31.072749 | orchestrator | 2026-03-29 00:51:31.072754 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-29 00:51:31.072760 | orchestrator | Sunday 29 March 2026 00:49:14 +0000 (0:00:00.698) 0:00:04.640 ********** 2026-03-29 00:51:31.072765 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:51:31.072771 | orchestrator | 2026-03-29 00:51:31.072776 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-29 00:51:31.072781 | orchestrator | Sunday 29 March 2026 00:49:15 +0000 (0:00:00.719) 0:00:05.361 ********** 2026-03-29 00:51:31.072787 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:31.072792 | orchestrator | 2026-03-29 00:51:31.072798 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-29 00:51:31.072803 | orchestrator | Sunday 29 March 2026 00:49:17 +0000 (0:00:01.512) 0:00:06.873 ********** 2026-03-29 00:51:31.072808 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:31.072814 | orchestrator | 2026-03-29 00:51:31.072820 | orchestrator | TASK 
[rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-29 00:51:31.072826 | orchestrator | Sunday 29 March 2026 00:49:17 +0000 (0:00:00.944) 0:00:07.818 ********** 2026-03-29 00:51:31.072831 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:31.072837 | orchestrator | 2026-03-29 00:51:31.072842 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-29 00:51:31.072848 | orchestrator | Sunday 29 March 2026 00:49:19 +0000 (0:00:01.448) 0:00:09.267 ********** 2026-03-29 00:51:31.072853 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:31.072858 | orchestrator | 2026-03-29 00:51:31.072863 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-29 00:51:31.072876 | orchestrator | Sunday 29 March 2026 00:49:20 +0000 (0:00:00.950) 0:00:10.217 ********** 2026-03-29 00:51:31.072882 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:31.072888 | orchestrator | 2026-03-29 00:51:31.072893 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-29 00:51:31.072899 | orchestrator | Sunday 29 March 2026 00:49:20 +0000 (0:00:00.372) 0:00:10.590 ********** 2026-03-29 00:51:31.072905 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:51:31.072910 | orchestrator | 2026-03-29 00:51:31.072915 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-29 00:51:31.072934 | orchestrator | Sunday 29 March 2026 00:49:21 +0000 (0:00:00.569) 0:00:11.159 ********** 2026-03-29 00:51:31.072941 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:31.072946 | orchestrator | 2026-03-29 00:51:31.072951 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-29 00:51:31.072957 | orchestrator | Sunday 29 
March 2026 00:49:22 +0000 (0:00:01.150) 0:00:12.309 ********** 2026-03-29 00:51:31.072979 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:31.072984 | orchestrator | 2026-03-29 00:51:31.072989 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-29 00:51:31.072994 | orchestrator | Sunday 29 March 2026 00:49:23 +0000 (0:00:01.019) 0:00:13.329 ********** 2026-03-29 00:51:31.073000 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:31.073012 | orchestrator | 2026-03-29 00:51:31.073018 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-29 00:51:31.073023 | orchestrator | Sunday 29 March 2026 00:49:23 +0000 (0:00:00.272) 0:00:13.601 ********** 2026-03-29 00:51:31.073033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:51:31.073050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:51:31.073058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:51:31.073066 | orchestrator | 2026-03-29 00:51:31.073071 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-29 00:51:31.073077 | orchestrator | Sunday 29 March 2026 00:49:25 +0000 (0:00:01.234) 0:00:14.836 ********** 2026-03-29 00:51:31.073093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:51:31.073106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:51:31.073113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:51:31.073119 | orchestrator | 2026-03-29 00:51:31.073124 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-29 00:51:31.073130 | orchestrator | Sunday 29 March 2026 00:49:26 +0000 (0:00:01.617) 0:00:16.453 ********** 2026-03-29 00:51:31.073135 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-29 00:51:31.073141 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-29 00:51:31.073147 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-29 00:51:31.073153 | orchestrator | 2026-03-29 00:51:31.073158 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-29 00:51:31.073163 | orchestrator | Sunday 29 March 2026 00:49:28 +0000 (0:00:01.742) 0:00:18.195 ********** 2026-03-29 00:51:31.073168 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-29 00:51:31.073173 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-29 00:51:31.073178 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-29 00:51:31.073199 | orchestrator | 2026-03-29 00:51:31.073208 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-29 00:51:31.073222 | orchestrator | Sunday 29 March 2026 00:49:32 +0000 (0:00:03.839) 0:00:22.034 ********** 2026-03-29 00:51:31.073228 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-29 00:51:31.073233 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-29 00:51:31.073239 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-29 00:51:31.073244 | orchestrator | 2026-03-29 00:51:31.073249 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-29 00:51:31.073254 | orchestrator | Sunday 29 March 2026 00:49:34 +0000 (0:00:02.603) 0:00:24.638 ********** 2026-03-29 00:51:31.073260 | 
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-29 00:51:31.073266 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-29 00:51:31.073271 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-29 00:51:31.073277 | orchestrator | 2026-03-29 00:51:31.073282 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-29 00:51:31.073287 | orchestrator | Sunday 29 March 2026 00:49:36 +0000 (0:00:02.154) 0:00:26.793 ********** 2026-03-29 00:51:31.073293 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-29 00:51:31.073298 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-29 00:51:31.073312 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-29 00:51:31.073318 | orchestrator | 2026-03-29 00:51:31.073323 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-29 00:51:31.073329 | orchestrator | Sunday 29 March 2026 00:49:38 +0000 (0:00:01.816) 0:00:28.609 ********** 2026-03-29 00:51:31.073334 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-29 00:51:31.073340 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-29 00:51:31.073345 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-29 00:51:31.073350 | orchestrator | 2026-03-29 00:51:31.073356 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-29 00:51:31.073361 | orchestrator | Sunday 29 March 2026 00:49:40 +0000 
(0:00:02.065) 0:00:30.675 ********** 2026-03-29 00:51:31.073366 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:31.073372 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:31.073378 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:31.073383 | orchestrator | 2026-03-29 00:51:31.073389 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-29 00:51:31.073395 | orchestrator | Sunday 29 March 2026 00:49:41 +0000 (0:00:00.371) 0:00:31.047 ********** 2026-03-29 00:51:31.073401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:51:31.073421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:51:31.073428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:51:31.073434 | orchestrator | 2026-03-29 00:51:31.073440 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 
2026-03-29 00:51:31.073445 | orchestrator | Sunday 29 March 2026 00:49:42 +0000 (0:00:01.148) 0:00:32.195 ********** 2026-03-29 00:51:31.073451 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:31.073457 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:31.073462 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:31.073467 | orchestrator | 2026-03-29 00:51:31.073473 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-29 00:51:31.073478 | orchestrator | Sunday 29 March 2026 00:49:43 +0000 (0:00:01.039) 0:00:33.235 ********** 2026-03-29 00:51:31.073484 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:31.073489 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:31.073495 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:31.073501 | orchestrator | 2026-03-29 00:51:31.073506 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-29 00:51:31.073512 | orchestrator | Sunday 29 March 2026 00:49:50 +0000 (0:00:07.052) 0:00:40.287 ********** 2026-03-29 00:51:31.073518 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:31.073524 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:31.073529 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:31.073544 | orchestrator | 2026-03-29 00:51:31.073551 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-29 00:51:31.073556 | orchestrator | 2026-03-29 00:51:31.073562 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-29 00:51:31.073572 | orchestrator | Sunday 29 March 2026 00:49:50 +0000 (0:00:00.280) 0:00:40.568 ********** 2026-03-29 00:51:31.073578 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:31.073584 | orchestrator | 2026-03-29 00:51:31.073590 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] 
********************** 2026-03-29 00:51:31.073596 | orchestrator | Sunday 29 March 2026 00:49:51 +0000 (0:00:00.638) 0:00:41.207 ********** 2026-03-29 00:51:31.073602 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:31.073608 | orchestrator | 2026-03-29 00:51:31.073614 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-29 00:51:31.073619 | orchestrator | Sunday 29 March 2026 00:49:51 +0000 (0:00:00.270) 0:00:41.477 ********** 2026-03-29 00:51:31.073624 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:31.073629 | orchestrator | 2026-03-29 00:51:31.073635 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-29 00:51:31.073640 | orchestrator | Sunday 29 March 2026 00:49:58 +0000 (0:00:06.729) 0:00:48.206 ********** 2026-03-29 00:51:31.073645 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:31.073650 | orchestrator | 2026-03-29 00:51:31.073655 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-29 00:51:31.073659 | orchestrator | 2026-03-29 00:51:31.073664 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-29 00:51:31.073669 | orchestrator | Sunday 29 March 2026 00:50:47 +0000 (0:00:49.419) 0:01:37.626 ********** 2026-03-29 00:51:31.073674 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:31.073679 | orchestrator | 2026-03-29 00:51:31.073684 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-29 00:51:31.073689 | orchestrator | Sunday 29 March 2026 00:50:48 +0000 (0:00:00.631) 0:01:38.257 ********** 2026-03-29 00:51:31.073694 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:31.073698 | orchestrator | 2026-03-29 00:51:31.073703 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-29 00:51:31.073709 | 
orchestrator | Sunday 29 March 2026 00:50:48 +0000 (0:00:00.208) 0:01:38.466 ********** 2026-03-29 00:51:31.073714 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:31.073719 | orchestrator | 2026-03-29 00:51:31.073724 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-29 00:51:31.073729 | orchestrator | Sunday 29 March 2026 00:50:50 +0000 (0:00:01.535) 0:01:40.002 ********** 2026-03-29 00:51:31.073734 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:31.073739 | orchestrator | 2026-03-29 00:51:31.073744 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-29 00:51:31.073749 | orchestrator | 2026-03-29 00:51:31.073753 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-29 00:51:31.073758 | orchestrator | Sunday 29 March 2026 00:51:05 +0000 (0:00:15.674) 0:01:55.677 ********** 2026-03-29 00:51:31.073764 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:31.073770 | orchestrator | 2026-03-29 00:51:31.073781 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-29 00:51:31.073787 | orchestrator | Sunday 29 March 2026 00:51:06 +0000 (0:00:00.642) 0:01:56.320 ********** 2026-03-29 00:51:31.073793 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:31.073799 | orchestrator | 2026-03-29 00:51:31.073805 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-29 00:51:31.073811 | orchestrator | Sunday 29 March 2026 00:51:06 +0000 (0:00:00.336) 0:01:56.656 ********** 2026-03-29 00:51:31.073817 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:31.073823 | orchestrator | 2026-03-29 00:51:31.073829 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-29 00:51:31.073834 | orchestrator | Sunday 29 March 2026 00:51:13 +0000 
(0:00:07.003) 0:02:03.660 ********** 2026-03-29 00:51:31.073840 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:31.073845 | orchestrator | 2026-03-29 00:51:31.073850 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-29 00:51:31.073856 | orchestrator | 2026-03-29 00:51:31.073868 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-29 00:51:31.073874 | orchestrator | Sunday 29 March 2026 00:51:25 +0000 (0:00:11.913) 0:02:15.573 ********** 2026-03-29 00:51:31.073880 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:51:31.073886 | orchestrator | 2026-03-29 00:51:31.073892 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-29 00:51:31.073897 | orchestrator | Sunday 29 March 2026 00:51:26 +0000 (0:00:00.570) 0:02:16.143 ********** 2026-03-29 00:51:31.073903 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:31.073909 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:31.073916 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:31.073922 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-29 00:51:31.073928 | orchestrator | enable_outward_rabbitmq_True 2026-03-29 00:51:31.073934 | orchestrator | 2026-03-29 00:51:31.073939 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-29 00:51:31.073945 | orchestrator | skipping: no hosts matched 2026-03-29 00:51:31.073950 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-29 00:51:31.073957 | orchestrator | outward_rabbitmq_restart 2026-03-29 00:51:31.073981 | orchestrator | 2026-03-29 00:51:31.073987 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-29 00:51:31.073993 | orchestrator | skipping: no hosts matched 2026-03-29 
00:51:31.073998 | orchestrator | 2026-03-29 00:51:31.074004 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-29 00:51:31.074010 | orchestrator | skipping: no hosts matched 2026-03-29 00:51:31.074059 | orchestrator | 2026-03-29 00:51:31.074067 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:51:31.074074 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-29 00:51:31.074080 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-29 00:51:31.074087 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:51:31.074093 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:51:31.074098 | orchestrator | 2026-03-29 00:51:31.074104 | orchestrator | 2026-03-29 00:51:31.074110 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:51:31.074115 | orchestrator | Sunday 29 March 2026 00:51:28 +0000 (0:00:02.389) 0:02:18.532 ********** 2026-03-29 00:51:31.074121 | orchestrator | =============================================================================== 2026-03-29 00:51:31.074127 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 77.01s 2026-03-29 00:51:31.074132 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 15.27s 2026-03-29 00:51:31.074190 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.05s 2026-03-29 00:51:31.074204 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.84s 2026-03-29 00:51:31.074210 | orchestrator | Check RabbitMQ service -------------------------------------------------- 
3.22s 2026-03-29 00:51:31.074215 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.60s 2026-03-29 00:51:31.074221 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.39s 2026-03-29 00:51:31.074236 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.15s 2026-03-29 00:51:31.074242 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.07s 2026-03-29 00:51:31.074247 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.91s 2026-03-29 00:51:31.074258 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.82s 2026-03-29 00:51:31.074264 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.74s 2026-03-29 00:51:31.074269 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.62s 2026-03-29 00:51:31.074275 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.51s 2026-03-29 00:51:31.074281 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 1.45s 2026-03-29 00:51:31.074286 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.23s 2026-03-29 00:51:31.074294 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.15s 2026-03-29 00:51:31.074308 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.15s 2026-03-29 00:51:31.074313 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.04s 2026-03-29 00:51:31.074319 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.02s 2026-03-29 00:51:31.074324 | orchestrator | 2026-03-29 00:51:31 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state 
STARTED 2026-03-29 00:51:31.074330 | orchestrator | 2026-03-29 00:51:31 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:51:31.074336 | orchestrator | 2026-03-29 00:51:31 | INFO  | Wait 1 second(s) until the next check 2026-03-29
00:52:16.676156 | orchestrator | 2026-03-29 00:52:16 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:52:19.713947 | orchestrator | 2026-03-29 00:52:19 | INFO  | Task 8e5df341-fcd4-485c-9419-3d748b148bbf is in state SUCCESS 2026-03-29 00:52:19.715287 | orchestrator | 2026-03-29 00:52:19.715330 | orchestrator | 2026-03-29 00:52:19.715339 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 00:52:19.715346 | orchestrator | 2026-03-29 00:52:19.715352 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 00:52:19.715357 | orchestrator | Sunday 29 March 2026 00:49:56 +0000 (0:00:00.161) 0:00:00.161 ********** 2026-03-29 00:52:19.715361 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:52:19.715367 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:52:19.715371 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:52:19.715375 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:19.715379 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:19.715382 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:52:19.715386 | orchestrator | 2026-03-29 00:52:19.715390 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 00:52:19.715394 | orchestrator | Sunday 29 March 2026 00:49:57 +0000 (0:00:00.576) 0:00:00.737 ********** 2026-03-29 00:52:19.715409 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-29 00:52:19.715414 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-29 00:52:19.715417 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-29 00:52:19.715421 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-29 00:52:19.715425 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-29 00:52:19.715429 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-29 00:52:19.715432 | orchestrator 
| 2026-03-29 00:52:19.715436 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-29 00:52:19.715440 | orchestrator | 2026-03-29 00:52:19.715444 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-29 00:52:19.715448 | orchestrator | Sunday 29 March 2026 00:49:58 +0000 (0:00:00.918) 0:00:01.655 ********** 2026-03-29 00:52:19.715454 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:52:19.715475 | orchestrator | 2026-03-29 00:52:19.715479 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-29 00:52:19.715534 | orchestrator | Sunday 29 March 2026 00:49:59 +0000 (0:00:01.277) 0:00:02.933 ********** 2026-03-29 00:52:19.715540 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715546 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715552 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 
'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715599 | orchestrator | 2026-03-29 00:52:19.715615 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-29 00:52:19.715619 | orchestrator | Sunday 29 March 2026 00:50:01 +0000 (0:00:01.736) 0:00:04.669 ********** 2026-03-29 00:52:19.715623 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715633 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715681 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715701 | orchestrator | 2026-03-29 00:52:19.715707 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-29 00:52:19.715713 | orchestrator | Sunday 29 March 2026 00:50:02 +0000 (0:00:01.486) 0:00:06.156 ********** 2026-03-29 00:52:19.715719 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715726 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715737 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715771 | orchestrator | 2026-03-29 00:52:19.715777 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-29 00:52:19.715783 | orchestrator | Sunday 29 March 
2026 00:50:03 +0000 (0:00:01.218) 0:00:07.375 ********** 2026-03-29 00:52:19.715789 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715796 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715803 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715811 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715819 | orchestrator | 2026-03-29 00:52:19.715825 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-29 00:52:19.715829 | orchestrator | Sunday 29 March 2026 00:50:05 +0000 (0:00:01.451) 0:00:08.826 ********** 2026-03-29 00:52:19.715837 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715844 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715848 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.715863 | orchestrator | 2026-03-29 00:52:19.715867 | orchestrator | TASK [ovn-controller 
: Create br-int bridge on OpenvSwitch] ******************** 2026-03-29 00:52:19.715871 | orchestrator | Sunday 29 March 2026 00:50:06 +0000 (0:00:01.254) 0:00:10.081 ********** 2026-03-29 00:52:19.715875 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:52:19.715880 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:52:19.715885 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:52:19.715898 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:52:19.715903 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:52:19.715907 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:52:19.715991 | orchestrator | 2026-03-29 00:52:19.715997 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-29 00:52:19.716001 | orchestrator | Sunday 29 March 2026 00:50:08 +0000 (0:00:02.359) 0:00:12.441 ********** 2026-03-29 00:52:19.716006 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-29 00:52:19.716011 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-29 00:52:19.716016 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-29 00:52:19.716020 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-29 00:52:19.716028 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-29 00:52:19.716033 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-29 00:52:19.716037 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-29 00:52:19.716042 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-29 00:52:19.716049 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-29 00:52:19.716054 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-29 00:52:19.716058 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-29 00:52:19.716062 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-29 00:52:19.716067 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-29 00:52:19.716073 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-29 00:52:19.716081 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-29 00:52:19.716085 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-29 00:52:19.716090 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-29 00:52:19.716094 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-29 00:52:19.716100 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-29 00:52:19.716104 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-29 00:52:19.716109 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-29 00:52:19.716113 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-29 00:52:19.716117 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-29 00:52:19.716121 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 00:52:19.716126 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-29 00:52:19.716130 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 00:52:19.716134 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 00:52:19.716138 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 00:52:19.716143 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 00:52:19.716149 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 00:52:19.716157 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 00:52:19.716166 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 00:52:19.716172 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 00:52:19.716178 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-29 00:52:19.716195 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 00:52:19.716200 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 00:52:19.716205 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 00:52:19.716211 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-29 00:52:19.716217 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-29 00:52:19.716223 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-29 00:52:19.716229 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-29 00:52:19.716235 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-29 00:52:19.716241 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-29 00:52:19.716247 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-29 00:52:19.716257 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-29 00:52:19.716263 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-29 00:52:19.716269 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-29 00:52:19.716275 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-29 00:52:19.716280 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 
'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-29 00:52:19.716289 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-29 00:52:19.716296 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-29 00:52:19.716301 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-29 00:52:19.716308 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-29 00:52:19.716314 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-29 00:52:19.716319 | orchestrator | 2026-03-29 00:52:19.716325 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 00:52:19.716331 | orchestrator | Sunday 29 March 2026 00:50:28 +0000 (0:00:19.901) 0:00:32.342 ********** 2026-03-29 00:52:19.716337 | orchestrator | 2026-03-29 00:52:19.716343 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 00:52:19.716349 | orchestrator | Sunday 29 March 2026 00:50:28 +0000 (0:00:00.063) 0:00:32.406 ********** 2026-03-29 00:52:19.716355 | orchestrator | 2026-03-29 00:52:19.716361 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 00:52:19.716366 | orchestrator | Sunday 29 March 2026 00:50:28 +0000 (0:00:00.063) 0:00:32.469 ********** 2026-03-29 00:52:19.716372 | orchestrator | 2026-03-29 00:52:19.716378 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 00:52:19.716384 | orchestrator | Sunday 29 March 2026 00:50:28 +0000 (0:00:00.079) 
0:00:32.549 ********** 2026-03-29 00:52:19.716393 | orchestrator | 2026-03-29 00:52:19.716399 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 00:52:19.716405 | orchestrator | Sunday 29 March 2026 00:50:29 +0000 (0:00:00.059) 0:00:32.608 ********** 2026-03-29 00:52:19.716410 | orchestrator | 2026-03-29 00:52:19.716416 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 00:52:19.716422 | orchestrator | Sunday 29 March 2026 00:50:29 +0000 (0:00:00.055) 0:00:32.664 ********** 2026-03-29 00:52:19.716428 | orchestrator | 2026-03-29 00:52:19.716434 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-29 00:52:19.716440 | orchestrator | Sunday 29 March 2026 00:50:29 +0000 (0:00:00.124) 0:00:32.789 ********** 2026-03-29 00:52:19.716445 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:52:19.716452 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:52:19.716457 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:52:19.716463 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:19.716469 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:19.716475 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:52:19.716480 | orchestrator | 2026-03-29 00:52:19.716486 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-29 00:52:19.716492 | orchestrator | Sunday 29 March 2026 00:50:31 +0000 (0:00:02.310) 0:00:35.099 ********** 2026-03-29 00:52:19.716498 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:52:19.716504 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:52:19.716510 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:52:19.716516 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:52:19.716522 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:52:19.716527 | orchestrator | changed: [testbed-node-1] 2026-03-29 
00:52:19.716533 | orchestrator | 2026-03-29 00:52:19.716539 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-29 00:52:19.716544 | orchestrator | 2026-03-29 00:52:19.716550 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-29 00:52:19.716556 | orchestrator | Sunday 29 March 2026 00:51:01 +0000 (0:00:30.227) 0:01:05.327 ********** 2026-03-29 00:52:19.716562 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:52:19.716568 | orchestrator | 2026-03-29 00:52:19.716573 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-29 00:52:19.716579 | orchestrator | Sunday 29 March 2026 00:51:02 +0000 (0:00:00.794) 0:01:06.122 ********** 2026-03-29 00:52:19.716585 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:52:19.716591 | orchestrator | 2026-03-29 00:52:19.716597 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-29 00:52:19.716602 | orchestrator | Sunday 29 March 2026 00:51:03 +0000 (0:00:00.979) 0:01:07.101 ********** 2026-03-29 00:52:19.716608 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:19.716614 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:19.716620 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:52:19.716626 | orchestrator | 2026-03-29 00:52:19.716631 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-29 00:52:19.716637 | orchestrator | Sunday 29 March 2026 00:51:04 +0000 (0:00:00.878) 0:01:07.980 ********** 2026-03-29 00:52:19.716643 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:19.716649 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:19.716655 | orchestrator | ok: [testbed-node-2] 2026-03-29 
00:52:19.716663 | orchestrator | 2026-03-29 00:52:19.716669 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-29 00:52:19.716675 | orchestrator | Sunday 29 March 2026 00:51:04 +0000 (0:00:00.279) 0:01:08.260 ********** 2026-03-29 00:52:19.716680 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:19.716686 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:19.716692 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:52:19.716698 | orchestrator | 2026-03-29 00:52:19.716709 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-29 00:52:19.716714 | orchestrator | Sunday 29 March 2026 00:51:05 +0000 (0:00:00.333) 0:01:08.593 ********** 2026-03-29 00:52:19.716731 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:19.716737 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:19.716743 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:52:19.716749 | orchestrator | 2026-03-29 00:52:19.716755 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-29 00:52:19.716766 | orchestrator | Sunday 29 March 2026 00:51:05 +0000 (0:00:00.239) 0:01:08.833 ********** 2026-03-29 00:52:19.716771 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:19.716774 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:19.716778 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:52:19.716781 | orchestrator | 2026-03-29 00:52:19.716785 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-29 00:52:19.716789 | orchestrator | Sunday 29 March 2026 00:51:05 +0000 (0:00:00.231) 0:01:09.064 ********** 2026-03-29 00:52:19.716793 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.716796 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.716800 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.716804 | orchestrator | 2026-03-29 
00:52:19.716807 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-29 00:52:19.716811 | orchestrator | Sunday 29 March 2026 00:51:05 +0000 (0:00:00.284) 0:01:09.349 ********** 2026-03-29 00:52:19.716815 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.716818 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.716822 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.716826 | orchestrator | 2026-03-29 00:52:19.716830 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-29 00:52:19.716833 | orchestrator | Sunday 29 March 2026 00:51:06 +0000 (0:00:00.263) 0:01:09.613 ********** 2026-03-29 00:52:19.716837 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.716841 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.716844 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.716848 | orchestrator | 2026-03-29 00:52:19.716852 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-29 00:52:19.716855 | orchestrator | Sunday 29 March 2026 00:51:06 +0000 (0:00:00.455) 0:01:10.069 ********** 2026-03-29 00:52:19.716859 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.716863 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.716866 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.716870 | orchestrator | 2026-03-29 00:52:19.716874 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-29 00:52:19.716878 | orchestrator | Sunday 29 March 2026 00:51:06 +0000 (0:00:00.418) 0:01:10.487 ********** 2026-03-29 00:52:19.716881 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.716885 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.716888 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.716892 | orchestrator | 2026-03-29 
00:52:19.716896 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-29 00:52:19.716900 | orchestrator | Sunday 29 March 2026 00:51:07 +0000 (0:00:00.324) 0:01:10.811 ********** 2026-03-29 00:52:19.716903 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.716907 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.716936 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.716942 | orchestrator | 2026-03-29 00:52:19.716947 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-29 00:52:19.716954 | orchestrator | Sunday 29 March 2026 00:51:07 +0000 (0:00:00.301) 0:01:11.113 ********** 2026-03-29 00:52:19.716959 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.716967 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.716976 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.716982 | orchestrator | 2026-03-29 00:52:19.716988 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-29 00:52:19.716999 | orchestrator | Sunday 29 March 2026 00:51:07 +0000 (0:00:00.406) 0:01:11.519 ********** 2026-03-29 00:52:19.717005 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.717011 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.717017 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.717023 | orchestrator | 2026-03-29 00:52:19.717029 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-29 00:52:19.717035 | orchestrator | Sunday 29 March 2026 00:51:08 +0000 (0:00:00.263) 0:01:11.783 ********** 2026-03-29 00:52:19.717041 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.717047 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.717053 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.717059 | orchestrator | 2026-03-29 
00:52:19.717074 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-29 00:52:19.717082 | orchestrator | Sunday 29 March 2026 00:51:08 +0000 (0:00:00.274) 0:01:12.058 ********** 2026-03-29 00:52:19.717086 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.717089 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.717094 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.717100 | orchestrator | 2026-03-29 00:52:19.717107 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-29 00:52:19.717113 | orchestrator | Sunday 29 March 2026 00:51:08 +0000 (0:00:00.253) 0:01:12.312 ********** 2026-03-29 00:52:19.717119 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.717125 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.717132 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.717138 | orchestrator | 2026-03-29 00:52:19.717144 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-29 00:52:19.717150 | orchestrator | Sunday 29 March 2026 00:51:09 +0000 (0:00:00.384) 0:01:12.696 ********** 2026-03-29 00:52:19.717157 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.717163 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.717176 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.717180 | orchestrator | 2026-03-29 00:52:19.717183 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-29 00:52:19.717187 | orchestrator | Sunday 29 March 2026 00:51:09 +0000 (0:00:00.312) 0:01:13.009 ********** 2026-03-29 00:52:19.717193 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:52:19.717199 | orchestrator | 2026-03-29 00:52:19.717206 | orchestrator | TASK [ovn-db : Set bootstrap 
args fact for NB (new cluster)] ******************* 2026-03-29 00:52:19.717212 | orchestrator | Sunday 29 March 2026 00:51:10 +0000 (0:00:01.046) 0:01:14.055 ********** 2026-03-29 00:52:19.717218 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:19.717224 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:19.717231 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:52:19.717237 | orchestrator | 2026-03-29 00:52:19.717243 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-29 00:52:19.717253 | orchestrator | Sunday 29 March 2026 00:51:11 +0000 (0:00:00.742) 0:01:14.798 ********** 2026-03-29 00:52:19.717260 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:19.717265 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:19.717273 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:52:19.717276 | orchestrator | 2026-03-29 00:52:19.717281 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-29 00:52:19.717287 | orchestrator | Sunday 29 March 2026 00:51:11 +0000 (0:00:00.704) 0:01:15.503 ********** 2026-03-29 00:52:19.717294 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.717300 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.717306 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.717312 | orchestrator | 2026-03-29 00:52:19.717318 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-29 00:52:19.717324 | orchestrator | Sunday 29 March 2026 00:51:12 +0000 (0:00:00.347) 0:01:15.850 ********** 2026-03-29 00:52:19.717336 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.717342 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.717347 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.717353 | orchestrator | 2026-03-29 00:52:19.717360 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the 
new node in NB DB] *** 2026-03-29 00:52:19.717367 | orchestrator | Sunday 29 March 2026 00:51:12 +0000 (0:00:00.290) 0:01:16.141 ********** 2026-03-29 00:52:19.717373 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.717379 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.717385 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.717391 | orchestrator | 2026-03-29 00:52:19.717397 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-29 00:52:19.717403 | orchestrator | Sunday 29 March 2026 00:51:13 +0000 (0:00:00.479) 0:01:16.621 ********** 2026-03-29 00:52:19.717409 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.717415 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.717422 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.717429 | orchestrator | 2026-03-29 00:52:19.717435 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-29 00:52:19.717442 | orchestrator | Sunday 29 March 2026 00:51:13 +0000 (0:00:00.277) 0:01:16.899 ********** 2026-03-29 00:52:19.717448 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.717454 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.717461 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.717467 | orchestrator | 2026-03-29 00:52:19.717473 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-29 00:52:19.717479 | orchestrator | Sunday 29 March 2026 00:51:13 +0000 (0:00:00.282) 0:01:17.181 ********** 2026-03-29 00:52:19.717485 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.717491 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.717497 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.717504 | orchestrator | 2026-03-29 00:52:19.717510 | orchestrator | TASK [ovn-db : Ensuring config directories exist] 
****************************** 2026-03-29 00:52:19.717516 | orchestrator | Sunday 29 March 2026 00:51:13 +0000 (0:00:00.282) 0:01:17.463 ********** 2026-03-29 00:52:19.717525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-29 00:52:19.717848 | orchestrator | 2026-03-29 00:52:19.717853 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-29 00:52:19.717859 | orchestrator | Sunday 29 March 2026 00:51:15 +0000 (0:00:01.694) 0:01:19.157 ********** 2026-03-29 00:52:19.717865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717893 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717954 | orchestrator | 2026-03-29 00:52:19.717960 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-29 00:52:19.717965 | orchestrator | Sunday 29 March 2026 00:51:19 +0000 (0:00:03.883) 0:01:23.041 ********** 2026-03-29 00:52:19.717972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.717998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': 
{'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718084 | orchestrator | 2026-03-29 00:52:19.718091 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-29 00:52:19.718097 | orchestrator | Sunday 29 March 2026 00:51:21 +0000 (0:00:02.241) 0:01:25.282 ********** 2026-03-29 00:52:19.718103 | orchestrator | 2026-03-29 00:52:19.718109 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-29 00:52:19.718115 | orchestrator | Sunday 29 March 2026 00:51:21 +0000 (0:00:00.060) 0:01:25.342 ********** 2026-03-29 00:52:19.718122 | orchestrator | 2026-03-29 00:52:19.718128 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-29 00:52:19.718134 | orchestrator | Sunday 29 March 2026 00:51:21 +0000 (0:00:00.060) 0:01:25.403 ********** 2026-03-29 00:52:19.718140 | orchestrator | 2026-03-29 00:52:19.718146 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-29 00:52:19.718152 | orchestrator | Sunday 29 March 2026 00:51:21 +0000 (0:00:00.059) 0:01:25.463 ********** 2026-03-29 00:52:19.718158 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:52:19.718164 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:52:19.718170 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:52:19.718176 | orchestrator | 2026-03-29 00:52:19.718183 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-29 00:52:19.718189 | orchestrator | Sunday 29 March 2026 00:51:24 +0000 (0:00:02.522) 0:01:27.985 ********** 2026-03-29 00:52:19.718195 | 
orchestrator | changed: [testbed-node-0] 2026-03-29 00:52:19.718201 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:52:19.718207 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:52:19.718213 | orchestrator | 2026-03-29 00:52:19.718219 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-29 00:52:19.718226 | orchestrator | Sunday 29 March 2026 00:51:31 +0000 (0:00:07.466) 0:01:35.452 ********** 2026-03-29 00:52:19.718231 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:52:19.718238 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:52:19.718255 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:52:19.718262 | orchestrator | 2026-03-29 00:52:19.718268 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-29 00:52:19.718279 | orchestrator | Sunday 29 March 2026 00:51:39 +0000 (0:00:07.267) 0:01:42.719 ********** 2026-03-29 00:52:19.718286 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.718292 | orchestrator | 2026-03-29 00:52:19.718298 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-29 00:52:19.718304 | orchestrator | Sunday 29 March 2026 00:51:39 +0000 (0:00:00.133) 0:01:42.853 ********** 2026-03-29 00:52:19.718311 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:19.718317 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:19.718323 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:52:19.718329 | orchestrator | 2026-03-29 00:52:19.718336 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-29 00:52:19.718343 | orchestrator | Sunday 29 March 2026 00:51:40 +0000 (0:00:00.910) 0:01:43.763 ********** 2026-03-29 00:52:19.718349 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.718355 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.718361 | orchestrator | changed: 
[testbed-node-0] 2026-03-29 00:52:19.718367 | orchestrator | 2026-03-29 00:52:19.718374 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-29 00:52:19.718380 | orchestrator | Sunday 29 March 2026 00:51:40 +0000 (0:00:00.627) 0:01:44.391 ********** 2026-03-29 00:52:19.718386 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:19.718392 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:19.718398 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:52:19.718405 | orchestrator | 2026-03-29 00:52:19.718411 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-29 00:52:19.718418 | orchestrator | Sunday 29 March 2026 00:51:41 +0000 (0:00:00.861) 0:01:45.252 ********** 2026-03-29 00:52:19.718425 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.718432 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.718438 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:52:19.718444 | orchestrator | 2026-03-29 00:52:19.718451 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-29 00:52:19.718457 | orchestrator | Sunday 29 March 2026 00:51:42 +0000 (0:00:00.624) 0:01:45.877 ********** 2026-03-29 00:52:19.718464 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:19.718470 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:19.718481 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:52:19.718487 | orchestrator | 2026-03-29 00:52:19.718494 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-29 00:52:19.718500 | orchestrator | Sunday 29 March 2026 00:51:43 +0000 (0:00:00.791) 0:01:46.668 ********** 2026-03-29 00:52:19.718506 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:19.718513 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:19.718520 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:52:19.718526 | orchestrator 
| 2026-03-29 00:52:19.718533 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-03-29 00:52:19.718539 | orchestrator | Sunday 29 March 2026 00:51:43 +0000 (0:00:00.693) 0:01:47.362 ********** 2026-03-29 00:52:19.718546 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:19.718552 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:19.718558 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:52:19.718564 | orchestrator | 2026-03-29 00:52:19.718571 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-29 00:52:19.718584 | orchestrator | Sunday 29 March 2026 00:51:44 +0000 (0:00:00.380) 0:01:47.742 ********** 2026-03-29 00:52:19.718592 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718598 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718611 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718618 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 
'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718625 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718632 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718638 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718644 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718658 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718665 | orchestrator | 2026-03-29 00:52:19.718671 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-29 00:52:19.718677 | orchestrator | Sunday 29 March 2026 00:51:45 +0000 (0:00:01.465) 0:01:49.207 ********** 2026-03-29 00:52:19.718683 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718690 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718702 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 
00:52:19.718709 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718814 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718844 | orchestrator | 2026-03-29 00:52:19.718850 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-29 00:52:19.718857 | orchestrator | Sunday 29 March 2026 00:51:49 +0000 (0:00:04.067) 0:01:53.275 ********** 2026-03-29 00:52:19.718870 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718885 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718907 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718976 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.718995 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 
'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.719009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:52:19.719015 | orchestrator | 2026-03-29 00:52:19.719036 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-29 00:52:19.719043 | orchestrator | Sunday 29 March 2026 00:51:53 +0000 (0:00:03.331) 0:01:56.606 ********** 2026-03-29 00:52:19.719049 | orchestrator | 2026-03-29 00:52:19.719055 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-29 00:52:19.719061 | orchestrator | Sunday 29 March 2026 00:51:53 +0000 (0:00:00.091) 0:01:56.697 ********** 2026-03-29 00:52:19.719067 | orchestrator | 2026-03-29 00:52:19.719074 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-29 00:52:19.719080 | orchestrator | Sunday 29 March 2026 00:51:53 +0000 (0:00:00.222) 0:01:56.920 ********** 2026-03-29 00:52:19.719086 | orchestrator | 2026-03-29 00:52:19.719092 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-29 00:52:19.719098 | orchestrator | Sunday 29 March 2026 00:51:53 +0000 (0:00:00.061) 0:01:56.982 ********** 2026-03-29 00:52:19.719105 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:52:19.719119 | orchestrator | changed: 
[testbed-node-2] 2026-03-29 00:52:19.719137 | orchestrator | 2026-03-29 00:52:19.719156 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-29 00:52:19.719162 | orchestrator | Sunday 29 March 2026 00:51:59 +0000 (0:00:06.122) 0:02:03.105 ********** 2026-03-29 00:52:19.719169 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:52:19.719175 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:52:19.719189 | orchestrator | 2026-03-29 00:52:19.719203 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-29 00:52:19.719209 | orchestrator | Sunday 29 March 2026 00:52:05 +0000 (0:00:06.200) 0:02:09.305 ********** 2026-03-29 00:52:19.719267 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:52:19.719282 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:52:19.719295 | orchestrator | 2026-03-29 00:52:19.719301 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-29 00:52:19.719308 | orchestrator | Sunday 29 March 2026 00:52:11 +0000 (0:00:06.231) 0:02:15.537 ********** 2026-03-29 00:52:19.719314 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:19.719320 | orchestrator | 2026-03-29 00:52:19.719331 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-29 00:52:19.719338 | orchestrator | Sunday 29 March 2026 00:52:12 +0000 (0:00:00.120) 0:02:15.658 ********** 2026-03-29 00:52:19.719351 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:19.719358 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:19.719364 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:52:19.719377 | orchestrator | 2026-03-29 00:52:19.719383 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-29 00:52:19.719399 | orchestrator | Sunday 29 March 2026 00:52:12 +0000 (0:00:00.762) 0:02:16.420 ********** 
2026-03-29 00:52:19.719405 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.719437 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.719443 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:52:19.719449 | orchestrator | 2026-03-29 00:52:19.719456 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-29 00:52:19.719487 | orchestrator | Sunday 29 March 2026 00:52:13 +0000 (0:00:00.617) 0:02:17.038 ********** 2026-03-29 00:52:19.719528 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:19.719534 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:19.719548 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:52:19.719554 | orchestrator | 2026-03-29 00:52:19.719559 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-29 00:52:19.719573 | orchestrator | Sunday 29 March 2026 00:52:14 +0000 (0:00:00.782) 0:02:17.820 ********** 2026-03-29 00:52:19.719594 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:19.719601 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:19.719607 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:52:19.719613 | orchestrator | 2026-03-29 00:52:19.719619 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-29 00:52:19.719625 | orchestrator | Sunday 29 March 2026 00:52:14 +0000 (0:00:00.636) 0:02:18.457 ********** 2026-03-29 00:52:19.719632 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:19.719646 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:19.719652 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:52:19.719666 | orchestrator | 2026-03-29 00:52:19.719673 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-29 00:52:19.719679 | orchestrator | Sunday 29 March 2026 00:52:15 +0000 (0:00:00.789) 0:02:19.247 ********** 2026-03-29 00:52:19.719685 | orchestrator 
| ok: [testbed-node-0]
2026-03-29 00:52:19.719691 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:52:19.719697 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:52:19.719703 | orchestrator |
2026-03-29 00:52:19.719709 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:52:19.719715 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-29 00:52:19.719729 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-29 00:52:19.719735 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-29 00:52:19.719742 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:52:19.719748 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:52:19.719755 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:52:19.719761 | orchestrator |
2026-03-29 00:52:19.719767 | orchestrator |
2026-03-29 00:52:19.719773 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:52:19.719779 | orchestrator | Sunday 29 March 2026 00:52:17 +0000 (0:00:01.455) 0:02:20.702 **********
2026-03-29 00:52:19.719785 | orchestrator | ===============================================================================
2026-03-29 00:52:19.719791 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 30.23s
2026-03-29 00:52:19.719798 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.90s
2026-03-29 00:52:19.719804 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.67s
2026-03-29 00:52:19.719810 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.50s
2026-03-29 00:52:19.719816 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.64s
2026-03-29 00:52:19.719822 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.07s
2026-03-29 00:52:19.719829 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.88s
2026-03-29 00:52:19.719839 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.33s
2026-03-29 00:52:19.719845 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.36s
2026-03-29 00:52:19.719851 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.31s
2026-03-29 00:52:19.719857 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.24s
2026-03-29 00:52:19.719863 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.74s
2026-03-29 00:52:19.719870 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.69s
2026-03-29 00:52:19.719876 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.49s
2026-03-29 00:52:19.719882 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.47s
2026-03-29 00:52:19.719892 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.46s
2026-03-29 00:52:19.719898 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.45s
2026-03-29 00:52:19.719904 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.28s
2026-03-29 00:52:19.719926 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.25s
2026-03-29 00:52:19.719933 | orchestrator | ovn-controller : 
Ensuring systemd override directory exists ------------- 1.22s
2026-03-29 00:52:19.719939 | orchestrator | 2026-03-29 00:52:19 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:52:19.719945 | orchestrator | 2026-03-29 00:52:19 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED
2026-03-29 00:52:19.719951 | orchestrator | 2026-03-29 00:52:19 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:53:20.594946 | orchestrator | 2026-03-29 00:53:20 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:53:20.597788 | orchestrator | 2026-03-29 00:53:20 | INFO  | Task 
113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:53:20.597898 | orchestrator | 2026-03-29 00:53:20 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:53:23.637124 | orchestrator | 2026-03-29 00:53:23 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:53:23.637222 | orchestrator | 2026-03-29 00:53:23 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:53:23.637233 | orchestrator | 2026-03-29 00:53:23 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:53:26.679921 | orchestrator | 2026-03-29 00:53:26 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:53:26.681887 | orchestrator | 2026-03-29 00:53:26 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:53:26.681950 | orchestrator | 2026-03-29 00:53:26 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:53:29.726310 | orchestrator | 2026-03-29 00:53:29 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:53:29.726601 | orchestrator | 2026-03-29 00:53:29 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:53:29.726863 | orchestrator | 2026-03-29 00:53:29 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:53:32.771740 | orchestrator | 2026-03-29 00:53:32 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:53:32.772453 | orchestrator | 2026-03-29 00:53:32 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:53:32.772529 | orchestrator | 2026-03-29 00:53:32 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:53:35.819381 | orchestrator | 2026-03-29 00:53:35 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:53:35.821307 | orchestrator | 2026-03-29 00:53:35 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 
00:53:35.821376 | orchestrator | 2026-03-29 00:53:35 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:53:38.860856 | orchestrator | 2026-03-29 00:53:38 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:53:38.862722 | orchestrator | 2026-03-29 00:53:38 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:53:38.863274 | orchestrator | 2026-03-29 00:53:38 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:53:41.905173 | orchestrator | 2026-03-29 00:53:41 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:53:41.905261 | orchestrator | 2026-03-29 00:53:41 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:53:41.905270 | orchestrator | 2026-03-29 00:53:41 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:53:44.944555 | orchestrator | 2026-03-29 00:53:44 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:53:44.946140 | orchestrator | 2026-03-29 00:53:44 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:53:44.946351 | orchestrator | 2026-03-29 00:53:44 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:53:47.995454 | orchestrator | 2026-03-29 00:53:47 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:53:47.997303 | orchestrator | 2026-03-29 00:53:47 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:53:47.997407 | orchestrator | 2026-03-29 00:53:47 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:53:51.059790 | orchestrator | 2026-03-29 00:53:51 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:53:51.059976 | orchestrator | 2026-03-29 00:53:51 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:53:51.060025 | orchestrator | 2026-03-29 00:53:51 | INFO  | Wait 1 second(s) 
until the next check 2026-03-29 00:53:54.108362 | orchestrator | 2026-03-29 00:53:54 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:53:54.109499 | orchestrator | 2026-03-29 00:53:54 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:53:54.109528 | orchestrator | 2026-03-29 00:53:54 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:53:57.160950 | orchestrator | 2026-03-29 00:53:57 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:53:57.162589 | orchestrator | 2026-03-29 00:53:57 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:53:57.162628 | orchestrator | 2026-03-29 00:53:57 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:00.213525 | orchestrator | 2026-03-29 00:54:00 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:54:00.213921 | orchestrator | 2026-03-29 00:54:00 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:54:00.214076 | orchestrator | 2026-03-29 00:54:00 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:03.257633 | orchestrator | 2026-03-29 00:54:03 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:54:03.260205 | orchestrator | 2026-03-29 00:54:03 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:54:03.260269 | orchestrator | 2026-03-29 00:54:03 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:06.300688 | orchestrator | 2026-03-29 00:54:06 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:54:06.301575 | orchestrator | 2026-03-29 00:54:06 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:54:06.301606 | orchestrator | 2026-03-29 00:54:06 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:09.360180 | orchestrator | 2026-03-29 
00:54:09 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:54:09.362689 | orchestrator | 2026-03-29 00:54:09 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:54:09.362740 | orchestrator | 2026-03-29 00:54:09 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:12.409404 | orchestrator | 2026-03-29 00:54:12 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:54:12.411944 | orchestrator | 2026-03-29 00:54:12 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:54:12.412017 | orchestrator | 2026-03-29 00:54:12 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:15.447308 | orchestrator | 2026-03-29 00:54:15 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:54:15.447831 | orchestrator | 2026-03-29 00:54:15 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:54:15.447855 | orchestrator | 2026-03-29 00:54:15 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:18.496564 | orchestrator | 2026-03-29 00:54:18 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:54:18.496652 | orchestrator | 2026-03-29 00:54:18 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:54:18.496660 | orchestrator | 2026-03-29 00:54:18 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:21.560249 | orchestrator | 2026-03-29 00:54:21 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:54:21.561348 | orchestrator | 2026-03-29 00:54:21 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:54:21.561406 | orchestrator | 2026-03-29 00:54:21 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:24.615273 | orchestrator | 2026-03-29 00:54:24 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state 
STARTED 2026-03-29 00:54:24.618537 | orchestrator | 2026-03-29 00:54:24 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:54:24.618573 | orchestrator | 2026-03-29 00:54:24 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:27.690951 | orchestrator | 2026-03-29 00:54:27 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:54:27.691078 | orchestrator | 2026-03-29 00:54:27 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:54:27.691102 | orchestrator | 2026-03-29 00:54:27 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:30.731869 | orchestrator | 2026-03-29 00:54:30 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:54:30.734485 | orchestrator | 2026-03-29 00:54:30 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:54:30.734535 | orchestrator | 2026-03-29 00:54:30 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:33.779826 | orchestrator | 2026-03-29 00:54:33 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:54:33.779913 | orchestrator | 2026-03-29 00:54:33 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:54:33.779924 | orchestrator | 2026-03-29 00:54:33 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:36.818147 | orchestrator | 2026-03-29 00:54:36 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:54:36.818899 | orchestrator | 2026-03-29 00:54:36 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:54:36.818922 | orchestrator | 2026-03-29 00:54:36 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:39.853661 | orchestrator | 2026-03-29 00:54:39 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:54:39.854815 | orchestrator | 2026-03-29 00:54:39 | INFO  
| Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:54:39.854896 | orchestrator | 2026-03-29 00:54:39 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:42.884166 | orchestrator | 2026-03-29 00:54:42 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:54:42.885772 | orchestrator | 2026-03-29 00:54:42 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:54:42.885858 | orchestrator | 2026-03-29 00:54:42 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:45.925852 | orchestrator | 2026-03-29 00:54:45 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:54:45.925959 | orchestrator | 2026-03-29 00:54:45 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:54:45.925976 | orchestrator | 2026-03-29 00:54:45 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:48.978566 | orchestrator | 2026-03-29 00:54:48 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:54:48.980280 | orchestrator | 2026-03-29 00:54:48 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:54:48.980333 | orchestrator | 2026-03-29 00:54:48 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:52.045144 | orchestrator | 2026-03-29 00:54:52 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:54:52.045225 | orchestrator | 2026-03-29 00:54:52 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:54:52.045237 | orchestrator | 2026-03-29 00:54:52 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:55.071442 | orchestrator | 2026-03-29 00:54:55 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:54:55.073174 | orchestrator | 2026-03-29 00:54:55 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 
00:54:55.073404 | orchestrator | 2026-03-29 00:54:55 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:54:58.117576 | orchestrator | 2026-03-29 00:54:58 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:54:58.121048 | orchestrator | 2026-03-29 00:54:58 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:54:58.121134 | orchestrator | 2026-03-29 00:54:58 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:01.158186 | orchestrator | 2026-03-29 00:55:01 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:01.158814 | orchestrator | 2026-03-29 00:55:01 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:55:01.158867 | orchestrator | 2026-03-29 00:55:01 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:04.202301 | orchestrator | 2026-03-29 00:55:04 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:04.204618 | orchestrator | 2026-03-29 00:55:04 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state STARTED 2026-03-29 00:55:04.204669 | orchestrator | 2026-03-29 00:55:04 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:07.254459 | orchestrator | 2026-03-29 00:55:07 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:07.262971 | orchestrator | 2026-03-29 00:55:07 | INFO  | Task 113dda18-3d42-487e-b5bf-95f5cdea23c6 is in state SUCCESS 2026-03-29 00:55:07.264499 | orchestrator | 2026-03-29 00:55:07.264544 | orchestrator | 2026-03-29 00:55:07.264553 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 00:55:07.264560 | orchestrator | 2026-03-29 00:55:07.264567 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 00:55:07.264574 | orchestrator | Sunday 29 March 2026 00:48:51 +0000 
(0:00:00.287) 0:00:00.287 **********
2026-03-29 00:55:07.264580 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:55:07.264587 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:55:07.264594 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:55:07.264600 | orchestrator |
2026-03-29 00:55:07.264606 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 00:55:07.264613 | orchestrator | Sunday 29 March 2026 00:48:51 +0000 (0:00:00.407) 0:00:00.695 **********
2026-03-29 00:55:07.264620 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-29 00:55:07.264627 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-29 00:55:07.264634 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-29 00:55:07.264639 | orchestrator |
2026-03-29 00:55:07.264643 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-29 00:55:07.264647 | orchestrator |
2026-03-29 00:55:07.264651 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-29 00:55:07.264655 | orchestrator | Sunday 29 March 2026 00:48:52 +0000 (0:00:00.379) 0:00:01.075 **********
2026-03-29 00:55:07.264659 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:55:07.264677 | orchestrator |
2026-03-29 00:55:07.264681 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-29 00:55:07.264685 | orchestrator | Sunday 29 March 2026 00:48:52 +0000 (0:00:00.596) 0:00:01.671 **********
2026-03-29 00:55:07.264688 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:55:07.264693 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:55:07.264696 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:55:07.264700 | orchestrator |
2026-03-29 00:55:07.264730 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-29 00:55:07.264746 | orchestrator | Sunday 29 March 2026 00:48:54 +0000 (0:00:01.260) 0:00:02.931 **********
2026-03-29 00:55:07.264750 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:55:07.264754 | orchestrator |
2026-03-29 00:55:07.264757 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-29 00:55:07.264761 | orchestrator | Sunday 29 March 2026 00:48:54 +0000 (0:00:00.701) 0:00:03.633 **********
2026-03-29 00:55:07.264765 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:55:07.264769 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:55:07.264773 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:55:07.264776 | orchestrator |
2026-03-29 00:55:07.264780 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-29 00:55:07.264784 | orchestrator | Sunday 29 March 2026 00:48:56 +0000 (0:00:01.935) 0:00:05.569 **********
2026-03-29 00:55:07.264787 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-29 00:55:07.264791 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-29 00:55:07.264795 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-29 00:55:07.264799 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-29 00:55:07.264803 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-29 00:55:07.264873 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-29 00:55:07.264878 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-29 00:55:07.264882 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-29 00:55:07.264885 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-29 00:55:07.264889 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-29 00:55:07.264893 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-29 00:55:07.264896 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-29 00:55:07.264900 | orchestrator |
2026-03-29 00:55:07.264904 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-29 00:55:07.264908 | orchestrator | Sunday 29 March 2026 00:49:00 +0000 (0:00:03.149) 0:00:08.719 **********
2026-03-29 00:55:07.264912 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-29 00:55:07.264916 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-29 00:55:07.264919 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-29 00:55:07.264923 | orchestrator |
2026-03-29 00:55:07.264927 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-29 00:55:07.264931 | orchestrator | Sunday 29 March 2026 00:49:00 +0000 (0:00:00.891) 0:00:09.611 **********
2026-03-29 00:55:07.264934 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-29 00:55:07.264938 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-29 00:55:07.264942 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-29 00:55:07.264946 | orchestrator |
2026-03-29 00:55:07.264949 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-29 00:55:07.264957 | orchestrator | Sunday 29 March 2026 00:49:02 +0000 (0:00:01.576)
0:00:11.187 **********
2026-03-29 00:55:07.264962 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-29 00:55:07.264965 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.264978 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-29 00:55:07.264982 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.264986 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-29 00:55:07.264990 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.264994 | orchestrator |
2026-03-29 00:55:07.264997 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-29 00:55:07.265001 | orchestrator | Sunday 29 March 2026 00:49:03 +0000 (0:00:00.636) 0:00:11.824 **********
2026-03-29 00:55:07.265006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.265015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.265020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.265024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.265028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.265037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.265042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.265046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.265052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.265078 | orchestrator |
2026-03-29 00:55:07.265083 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-03-29 00:55:07.265087 | orchestrator | Sunday 29 March 2026 00:49:04 +0000 (0:00:01.583) 0:00:13.407 **********
2026-03-29 00:55:07.265091 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:55:07.265094 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:55:07.265098 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:55:07.265102 | orchestrator |
2026-03-29 00:55:07.265106 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-03-29 00:55:07.265109 | orchestrator | Sunday 29 March 2026 00:49:05 +0000 (0:00:00.836) 0:00:14.243 **********
2026-03-29 00:55:07.265113 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-03-29 00:55:07.265117 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-03-29 00:55:07.265121 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-03-29 00:55:07.265124 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-03-29 00:55:07.265128 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-03-29 00:55:07.265132 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-03-29 00:55:07.265136 | orchestrator |
2026-03-29 00:55:07.265139 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-03-29 00:55:07.265143 | orchestrator | Sunday 29 March 2026 00:49:07 +0000 (0:00:01.879) 0:00:16.123 **********
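The long run of "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" messages earlier in this log comes from the osism client polling its tasks until they reach a terminal state. A minimal sketch of such a wait loop is below; `get_state` is a hypothetical stand-in for the real task-state lookup, not the actual osism code.

```python
import time

# Terminal states follow the usual Celery convention; this is an assumption
# about the task backend, not taken from this log.
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(task_ids, get_state, delay=1):
    """Poll every task until each reaches a terminal state; return final states."""
    pending = set(task_ids)
    final = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                final[task_id] = state
                pending.discard(task_id)
        if pending:
            print(f"Wait {delay} second(s) until the next check")
            time.sleep(delay)
    return final

# Fake state source that reports STARTED twice before SUCCESS:
states = iter(["STARTED", "STARTED", "SUCCESS"])
result = wait_for_tasks(["113dda18"], lambda _tid: next(states), delay=0)
print(result)  # → {'113dda18': 'SUCCESS'}
```

In the log above, two task IDs are polled together, which is why each check prints a pair of STARTED lines before the single wait message.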
2026-03-29 00:55:07.265147 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:55:07.265151 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:55:07.265154 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:55:07.265158 | orchestrator |
2026-03-29 00:55:07.265162 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-03-29 00:55:07.265168 | orchestrator | Sunday 29 March 2026 00:49:09 +0000 (0:00:01.673) 0:00:17.796 **********
2026-03-29 00:55:07.265172 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:55:07.265176 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:55:07.265180 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:55:07.265183 | orchestrator |
2026-03-29 00:55:07.265187 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-03-29 00:55:07.265191 | orchestrator | Sunday 29 March 2026 00:49:10 +0000 (0:00:00.287) 0:00:19.080 **********
2026-03-29 00:55:07.265195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.265207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.265214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.265225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__33b03c57e2bc85bcb3491e8b50e65eabcc730ef7', '__omit_place_holder__33b03c57e2bc85bcb3491e8b50e65eabcc730ef7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-29 00:55:07.265232 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.265242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.265249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.265259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.265265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__33b03c57e2bc85bcb3491e8b50e65eabcc730ef7', '__omit_place_holder__33b03c57e2bc85bcb3491e8b50e65eabcc730ef7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-29 00:55:07.265271 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.265283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 00:55:07.265290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:55:07.265299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:55:07.265306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__33b03c57e2bc85bcb3491e8b50e65eabcc730ef7', '__omit_place_holder__33b03c57e2bc85bcb3491e8b50e65eabcc730ef7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-29 00:55:07.265316 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.265322 | orchestrator | 2026-03-29 00:55:07.265326 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-29 00:55:07.265330 | orchestrator | Sunday 29 March 2026 00:49:11 +0000 (0:00:00.937) 0:00:20.018 ********** 2026-03-29 00:55:07.265334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-29 00:55:07.265338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-29 00:55:07.265346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-29 00:55:07.265351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:55:07.265356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:55:07.265361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__33b03c57e2bc85bcb3491e8b50e65eabcc730ef7', '__omit_place_holder__33b03c57e2bc85bcb3491e8b50e65eabcc730ef7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-29 00:55:07.265369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:55:07.265373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:55:07.265377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__33b03c57e2bc85bcb3491e8b50e65eabcc730ef7', '__omit_place_holder__33b03c57e2bc85bcb3491e8b50e65eabcc730ef7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-29 00:55:07.265383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:55:07.265387 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:55:07.265393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__33b03c57e2bc85bcb3491e8b50e65eabcc730ef7', '__omit_place_holder__33b03c57e2bc85bcb3491e8b50e65eabcc730ef7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-29 00:55:07.265400 | orchestrator | 2026-03-29 00:55:07.265438 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-29 00:55:07.265445 | orchestrator | Sunday 29 March 2026 00:49:14 +0000 (0:00:02.958) 0:00:22.976 ********** 2026-03-29 00:55:07.265452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-29 00:55:07.265459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-29 00:55:07.265466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-29 00:55:07.265478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:55:07.265485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:55:07.265494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:55:07.265508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 00:55:07.265515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 00:55:07.265522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 00:55:07.265536 | orchestrator | 2026-03-29 00:55:07.265542 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-29 00:55:07.265549 | orchestrator | Sunday 29 March 2026 00:49:18 +0000 (0:00:04.192) 0:00:27.169 ********** 2026-03-29 00:55:07.265554 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-29 00:55:07.265558 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-29 00:55:07.265562 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-29 00:55:07.265566 | orchestrator | 2026-03-29 
00:55:07.265570 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-29 00:55:07.265574 | orchestrator | Sunday 29 March 2026 00:49:21 +0000 (0:00:02.896) 0:00:30.065 ********** 2026-03-29 00:55:07.265577 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-29 00:55:07.265581 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-29 00:55:07.265585 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-29 00:55:07.265589 | orchestrator | 2026-03-29 00:55:07.265800 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-29 00:55:07.265810 | orchestrator | Sunday 29 March 2026 00:49:24 +0000 (0:00:03.385) 0:00:33.451 ********** 2026-03-29 00:55:07.265814 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.265818 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.265822 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.265826 | orchestrator | 2026-03-29 00:55:07.265830 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-29 00:55:07.265834 | orchestrator | Sunday 29 March 2026 00:49:25 +0000 (0:00:01.054) 0:00:34.506 ********** 2026-03-29 00:55:07.265837 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-29 00:55:07.265847 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-29 00:55:07.265851 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-29 00:55:07.265855 | orchestrator | 2026-03-29 
00:55:07.265858 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-29 00:55:07.265862 | orchestrator | Sunday 29 March 2026 00:49:28 +0000 (0:00:02.945) 0:00:37.451 ********** 2026-03-29 00:55:07.265866 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-29 00:55:07.265870 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-29 00:55:07.265874 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-29 00:55:07.265900 | orchestrator | 2026-03-29 00:55:07.265908 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-29 00:55:07.265912 | orchestrator | Sunday 29 March 2026 00:49:31 +0000 (0:00:03.238) 0:00:40.690 ********** 2026-03-29 00:55:07.265916 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-29 00:55:07.265920 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-29 00:55:07.265923 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-29 00:55:07.265927 | orchestrator | 2026-03-29 00:55:07.265931 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-29 00:55:07.265935 | orchestrator | Sunday 29 March 2026 00:49:34 +0000 (0:00:02.150) 0:00:42.840 ********** 2026-03-29 00:55:07.265939 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-29 00:55:07.265943 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-29 00:55:07.265946 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-29 00:55:07.265950 | orchestrator | 2026-03-29 00:55:07.265954 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 
2026-03-29 00:55:07.265958 | orchestrator | Sunday 29 March 2026 00:49:36 +0000 (0:00:02.655) 0:00:45.496 ********** 2026-03-29 00:55:07.265961 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:55:07.265966 | orchestrator | 2026-03-29 00:55:07.265969 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-29 00:55:07.265973 | orchestrator | Sunday 29 March 2026 00:49:37 +0000 (0:00:01.059) 0:00:46.556 ********** 2026-03-29 00:55:07.265977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-29 00:55:07.265981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-29 00:55:07.265989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-29 00:55:07.265996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:55:07.266055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:55:07.266068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.266073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.266077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.266081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.266088 | orchestrator |
2026-03-29 00:55:07.266092 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2026-03-29 00:55:07.266096 | orchestrator | Sunday 29 March 2026 00:49:41 +0000 (0:00:04.083) 0:00:50.640 **********
2026-03-29 00:55:07.266109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.266119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.266130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.266137 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.266143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.266150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.266156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.266165 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.266171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.266182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.266189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.266195 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.266232 | orchestrator |
2026-03-29 00:55:07.266240 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-03-29 00:55:07.266246 | orchestrator | Sunday 29 March 2026 00:49:42 +0000 (0:00:00.865) 0:00:51.505 **********
2026-03-29 00:55:07.266256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.266264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.266269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.266276 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.266294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.266303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.266307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.266311 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.266321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.266328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.266338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.266344 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.266350 | orchestrator |
2026-03-29 00:55:07.266360 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-29 00:55:07.266366 | orchestrator | Sunday 29 March 2026 00:49:44 +0000 (0:00:02.095) 0:00:53.600 **********
2026-03-29 00:55:07.266373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.266384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.266391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.266398 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.266405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.266416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.266423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.266430 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.266438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.266443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.266450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.266455 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.266459 | orchestrator |
2026-03-29 00:55:07.266464 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-29 00:55:07.266468 | orchestrator | Sunday 29 March 2026 00:49:46 +0000 (0:00:01.698) 0:00:55.299 **********
2026-03-29 00:55:07.266473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.266480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.266517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.266527 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.266539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.266546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.266553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.266565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.266572 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.266578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.266589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.266595 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.266601 | orchestrator |
2026-03-29 00:55:07.266607 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-03-29 00:55:07.266614 | orchestrator | Sunday 29 March 2026 00:49:47 +0000 (0:00:00.628) 0:00:55.928 **********
2026-03-29 00:55:07.266626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.266634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.266641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.266648 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.266955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.266968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.266972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.266976 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.266983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.266992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.266996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.267000 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.267004 | orchestrator |
2026-03-29 00:55:07.267007 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2026-03-29 00:55:07.267030 | orchestrator | Sunday 29 March 2026 00:49:48 +0000 (0:00:01.045) 0:00:56.974 **********
2026-03-29 00:55:07.267035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.267046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.267056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.267063 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.267074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.267085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.267092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.267098 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.267104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.267114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.267129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.267136 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.267149 | orchestrator |
2026-03-29 00:55:07.267154 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-03-29 00:55:07.267158 | orchestrator | Sunday 29 March 2026 00:49:48 +0000 (0:00:00.597) 0:00:57.571 **********
2026-03-29 00:55:07.267161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-29 00:55:07.267171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:55:07.267175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:55:07.267179 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.267183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 00:55:07.267187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:55:07.267194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:55:07.267199 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.267202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 00:55:07.267210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:55:07.267214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:55:07.267218 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.267222 | orchestrator | 2026-03-29 00:55:07.267226 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-29 00:55:07.267230 | orchestrator | Sunday 29 March 2026 00:49:49 +0000 (0:00:00.590) 0:00:58.161 ********** 2026-03-29 00:55:07.267234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-29 00:55:07.267238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:55:07.267242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:55:07.267246 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.267252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 00:55:07.267258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:55:07.267264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:55:07.267268 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.267272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 00:55:07.267276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:55:07.267280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:55:07.267284 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.267288 | orchestrator | 2026-03-29 00:55:07.267292 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-29 00:55:07.267314 | orchestrator | Sunday 29 March 2026 00:49:50 +0000 (0:00:01.352) 0:00:59.513 ********** 2026-03-29 00:55:07.267319 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-29 00:55:07.267323 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-29 00:55:07.267329 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-29 00:55:07.267336 | orchestrator | 2026-03-29 00:55:07.267340 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-29 00:55:07.267344 | orchestrator | Sunday 29 March 2026 00:49:52 +0000 (0:00:01.427) 0:01:00.941 ********** 2026-03-29 00:55:07.267357 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-29 00:55:07.267362 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-29 00:55:07.267365 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-29 00:55:07.267369 | orchestrator | 2026-03-29 00:55:07.267373 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-29 00:55:07.267377 | orchestrator | Sunday 29 March 2026 00:49:53 +0000 (0:00:01.521) 0:01:02.463 ********** 2026-03-29 00:55:07.267380 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-29 00:55:07.267384 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-29 00:55:07.267388 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.267392 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-29 00:55:07.267396 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-29 00:55:07.267403 | orchestrator | skipping: [testbed-node-2] => (item={'src': 
'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-29 00:55:07.267429 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.267437 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-29 00:55:07.267447 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.267453 | orchestrator | 2026-03-29 00:55:07.267459 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-29 00:55:07.267464 | orchestrator | Sunday 29 March 2026 00:49:55 +0000 (0:00:01.285) 0:01:03.748 ********** 2026-03-29 00:55:07.267470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-29 00:55:07.267477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-29 00:55:07.267500 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-29 00:55:07.267537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:55:07.267545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:55:07.267552 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:55:07.267562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 00:55:07.267568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 00:55:07.267574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 00:55:07.267581 | orchestrator | 2026-03-29 00:55:07.267588 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-29 00:55:07.267594 | orchestrator | Sunday 29 March 2026 00:49:57 +0000 (0:00:02.930) 0:01:06.679 ********** 2026-03-29 00:55:07.267601 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:55:07.267613 | orchestrator | 2026-03-29 00:55:07.267619 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-29 00:55:07.267625 | orchestrator | Sunday 29 March 2026 00:49:58 +0000 (0:00:00.540) 0:01:07.220 ********** 2026-03-29 00:55:07.267632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-29 00:55:07.267643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-29 00:55:07.267651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.267661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.267669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-29 00:55:07.267676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-29 00:55:07.267687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.267700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.267742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-29 00:55:07.267750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-29 00:55:07.267763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.267768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.267778 | orchestrator | 2026-03-29 00:55:07.267782 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-29 00:55:07.267787 | orchestrator | Sunday 29 March 2026 00:50:02 +0000 (0:00:04.248) 0:01:11.468 ********** 2026-03-29 00:55:07.267792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-29 00:55:07.267800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-29 00:55:07.267805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-29 00:55:07.267812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-29 00:55:07.267817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.267821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.267829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.267833 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.267839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.267843 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.267851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-29 00:55:07.267861 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-29 00:55:07.267919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.267928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.267939 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.267946 | orchestrator | 2026-03-29 00:55:07.267951 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 
2026-03-29 00:55:07.267958 | orchestrator | Sunday 29 March 2026 00:50:03 +0000 (0:00:00.825) 0:01:12.294 ********** 2026-03-29 00:55:07.267964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-29 00:55:07.267971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-29 00:55:07.267978 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.267985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-29 00:55:07.268009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-29 00:55:07.268016 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.268022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-29 00:55:07.268049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-29 00:55:07.268056 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.268062 | orchestrator | 2026-03-29 00:55:07.268073 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-29 00:55:07.268080 | orchestrator | Sunday 29 March 2026 00:50:04 +0000 (0:00:01.220) 0:01:13.515 ********** 
2026-03-29 00:55:07.268086 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:55:07.268092 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:55:07.268097 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.268104 | orchestrator | 2026-03-29 00:55:07.268109 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-29 00:55:07.268115 | orchestrator | Sunday 29 March 2026 00:50:06 +0000 (0:00:01.422) 0:01:14.938 ********** 2026-03-29 00:55:07.268121 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:55:07.268127 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:55:07.268133 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.268138 | orchestrator | 2026-03-29 00:55:07.268144 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-29 00:55:07.268150 | orchestrator | Sunday 29 March 2026 00:50:08 +0000 (0:00:01.908) 0:01:16.847 ********** 2026-03-29 00:55:07.268157 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:55:07.268163 | orchestrator | 2026-03-29 00:55:07.268170 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-29 00:55:07.268176 | orchestrator | Sunday 29 March 2026 00:50:08 +0000 (0:00:00.670) 0:01:17.518 ********** 2026-03-29 00:55:07.268182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 00:55:07.268193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.268200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.268206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 00:55:07.268254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.268267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.268279 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 00:55:07.268286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.268292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.268298 | orchestrator | 2026-03-29 00:55:07.268305 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-29 00:55:07.268311 | orchestrator | Sunday 29 March 2026 00:50:14 +0000 (0:00:05.801) 0:01:23.320 ********** 2026-03-29 00:55:07.268322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 00:55:07.268329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.268349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.268356 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.268363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 00:55:07.268369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 
'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.268375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.268382 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.268393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 00:55:07.268404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.268439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.268446 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.268452 | orchestrator | 2026-03-29 00:55:07.268458 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-29 00:55:07.268465 | orchestrator | Sunday 29 March 2026 00:50:15 +0000 (0:00:00.952) 0:01:24.273 ********** 2026-03-29 00:55:07.268472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-29 00:55:07.268478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-29 00:55:07.268511 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.268519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-29 00:55:07.268525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-29 00:55:07.268532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-29 00:55:07.268551 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.268559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-29 00:55:07.268566 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.268572 | orchestrator | 2026-03-29 00:55:07.268595 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-29 00:55:07.268603 | orchestrator | Sunday 29 March 2026 00:50:16 +0000 (0:00:00.851) 0:01:25.124 ********** 2026-03-29 00:55:07.268609 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:55:07.268616 | orchestrator | 
changed: [testbed-node-1] 2026-03-29 00:55:07.268622 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.268628 | orchestrator | 2026-03-29 00:55:07.268635 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-29 00:55:07.268642 | orchestrator | Sunday 29 March 2026 00:50:17 +0000 (0:00:01.223) 0:01:26.347 ********** 2026-03-29 00:55:07.268653 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:55:07.268660 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:55:07.268666 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.268673 | orchestrator | 2026-03-29 00:55:07.268683 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-29 00:55:07.268691 | orchestrator | Sunday 29 March 2026 00:50:19 +0000 (0:00:02.073) 0:01:28.421 ********** 2026-03-29 00:55:07.268697 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.268713 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.268720 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.268726 | orchestrator | 2026-03-29 00:55:07.268733 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-29 00:55:07.268739 | orchestrator | Sunday 29 March 2026 00:50:19 +0000 (0:00:00.258) 0:01:28.680 ********** 2026-03-29 00:55:07.268745 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:55:07.268751 | orchestrator | 2026-03-29 00:55:07.268758 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-29 00:55:07.268764 | orchestrator | Sunday 29 March 2026 00:50:20 +0000 (0:00:00.735) 0:01:29.415 ********** 2026-03-29 00:55:07.269217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-29 00:55:07.269233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-29 00:55:07.269239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 
'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-29 00:55:07.269246 | orchestrator | 2026-03-29 00:55:07.269252 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-29 00:55:07.269259 | orchestrator | Sunday 29 March 2026 00:50:22 +0000 (0:00:02.277) 0:01:31.693 ********** 2026-03-29 00:55:07.269272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-29 00:55:07.269278 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.269290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 
check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-29 00:55:07.269298 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.269307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-29 00:55:07.269314 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.269320 | orchestrator | 2026-03-29 00:55:07.269326 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-29 00:55:07.269332 | orchestrator | Sunday 29 March 2026 00:50:24 +0000 (0:00:01.317) 0:01:33.010 ********** 2026-03-29 00:55:07.269339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-29 00:55:07.269347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-29 00:55:07.269355 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.269362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-29 00:55:07.269373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-29 00:55:07.269380 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.269386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-29 00:55:07.269393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-29 00:55:07.269399 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.269405 | orchestrator | 2026-03-29 00:55:07.269412 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-29 00:55:07.269418 | orchestrator | Sunday 29 March 2026 00:50:26 +0000 (0:00:01.829) 0:01:34.840 ********** 2026-03-29 00:55:07.269424 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.269430 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.269439 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.269446 | orchestrator | 2026-03-29 00:55:07.269453 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-29 00:55:07.269459 | orchestrator | Sunday 29 March 2026 00:50:26 +0000 (0:00:00.428) 0:01:35.268 ********** 2026-03-29 00:55:07.269466 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.269472 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.269479 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.269485 | orchestrator | 2026-03-29 00:55:07.269492 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-29 00:55:07.269498 | orchestrator | Sunday 29 March 2026 00:50:28 +0000 (0:00:01.534) 
0:01:36.803 ********** 2026-03-29 00:55:07.269505 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:55:07.269511 | orchestrator | 2026-03-29 00:55:07.269568 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-29 00:55:07.269576 | orchestrator | Sunday 29 March 2026 00:50:29 +0000 (0:00:00.909) 0:01:37.712 ********** 2026-03-29 00:55:07.269583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 00:55:07.269594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.269601 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.269609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 00:55:07.269619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.269632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.269640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.269651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.269658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 00:55:07.269665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.269675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.269685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.269695 | orchestrator | 2026-03-29 00:55:07.269711 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-29 00:55:07.269718 | orchestrator | Sunday 29 March 2026 00:50:32 +0000 (0:00:03.716) 0:01:41.428 ********** 2026-03-29 00:55:07.269725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 00:55:07.269732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.269738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.269747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.269754 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.269762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 00:55:07.269773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.269780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.269787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 00:55:07.269794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.269800 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.269810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.269819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.269830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.269837 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.269843 | orchestrator | 2026-03-29 00:55:07.269850 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-29 00:55:07.269856 | orchestrator | Sunday 29 March 2026 00:50:33 +0000 (0:00:00.628) 0:01:42.057 ********** 2026-03-29 00:55:07.269864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-29 00:55:07.269870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-29 00:55:07.269877 | 
orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.269884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-29 00:55:07.269891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-29 00:55:07.269897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-29 00:55:07.269903 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.269909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-29 00:55:07.269915 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.269922 | orchestrator | 2026-03-29 00:55:07.269927 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-29 00:55:07.269933 | orchestrator | Sunday 29 March 2026 00:50:34 +0000 (0:00:01.153) 0:01:43.210 ********** 2026-03-29 00:55:07.269939 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:55:07.269946 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.269953 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:55:07.269959 | orchestrator | 2026-03-29 00:55:07.269966 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-29 00:55:07.269972 | orchestrator | Sunday 29 March 2026 00:50:35 +0000 (0:00:01.477) 0:01:44.687 ********** 2026-03-29 
00:55:07.269978 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:55:07.269989 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:55:07.269995 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.270001 | orchestrator | 2026-03-29 00:55:07.270010 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-29 00:55:07.270048 | orchestrator | Sunday 29 March 2026 00:50:37 +0000 (0:00:01.995) 0:01:46.683 ********** 2026-03-29 00:55:07.270054 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.270102 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.270108 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.270115 | orchestrator | 2026-03-29 00:55:07.270122 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-29 00:55:07.270128 | orchestrator | Sunday 29 March 2026 00:50:38 +0000 (0:00:00.413) 0:01:47.096 ********** 2026-03-29 00:55:07.270135 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.270140 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.270146 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.270152 | orchestrator | 2026-03-29 00:55:07.270162 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-29 00:55:07.270169 | orchestrator | Sunday 29 March 2026 00:50:38 +0000 (0:00:00.330) 0:01:47.427 ********** 2026-03-29 00:55:07.270175 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:55:07.270181 | orchestrator | 2026-03-29 00:55:07.270187 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-29 00:55:07.270194 | orchestrator | Sunday 29 March 2026 00:50:39 +0000 (0:00:00.941) 0:01:48.369 ********** 2026-03-29 00:55:07.270201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 00:55:07.270210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 00:55:07.270217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 
00:55:07.270289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 00:55:07.270302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 00:55:07.270308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 00:55:07.270357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 00:55:07.270368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 
5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270409 | orchestrator | 2026-03-29 00:55:07.270414 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-29 00:55:07.270420 | orchestrator | Sunday 29 March 2026 00:50:43 +0000 (0:00:04.190) 0:01:52.559 ********** 2026-03-29 00:55:07.270427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 00:55:07.270437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 00:55:07.270448 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 00:55:07.270488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270494 | orchestrator | skipping: 
[testbed-node-0] 2026-03-29 00:55:07.270503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 00:55:07.270513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270527 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270550 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.270556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 00:55:07.270566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 00:55:07.270575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 
00:55:07.270607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.270614 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.270620 | orchestrator | 2026-03-29 00:55:07.270626 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-29 00:55:07.270632 | orchestrator | Sunday 29 March 2026 00:50:44 +0000 (0:00:00.809) 0:01:53.369 ********** 2026-03-29 00:55:07.270638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-29 00:55:07.270645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-29 00:55:07.270650 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.270659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-29 00:55:07.270665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-29 00:55:07.270671 | 
orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.270741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-29 00:55:07.270750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-29 00:55:07.270756 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.270763 | orchestrator | 2026-03-29 00:55:07.270769 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-29 00:55:07.270776 | orchestrator | Sunday 29 March 2026 00:50:46 +0000 (0:00:01.356) 0:01:54.726 ********** 2026-03-29 00:55:07.270782 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:55:07.270789 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:55:07.270795 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.270802 | orchestrator | 2026-03-29 00:55:07.270808 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-29 00:55:07.270814 | orchestrator | Sunday 29 March 2026 00:50:47 +0000 (0:00:01.409) 0:01:56.135 ********** 2026-03-29 00:55:07.270821 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:55:07.270827 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:55:07.270838 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.270845 | orchestrator | 2026-03-29 00:55:07.270872 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-29 00:55:07.270879 | orchestrator | Sunday 29 March 2026 00:50:49 +0000 (0:00:01.999) 0:01:58.134 ********** 2026-03-29 00:55:07.270886 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.270909 | orchestrator | skipping: 
[testbed-node-1] 2026-03-29 00:55:07.270916 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.270922 | orchestrator | 2026-03-29 00:55:07.270929 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-29 00:55:07.270935 | orchestrator | Sunday 29 March 2026 00:50:49 +0000 (0:00:00.264) 0:01:58.399 ********** 2026-03-29 00:55:07.270941 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:55:07.270948 | orchestrator | 2026-03-29 00:55:07.270955 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-29 00:55:07.270962 | orchestrator | Sunday 29 March 2026 00:50:50 +0000 (0:00:00.924) 0:01:59.323 ********** 2026-03-29 00:55:07.270970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 00:55:07.270988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 00:55:07.271000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 00:55:07.271015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 00:55:07.271026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 00:55:07.271039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 00:55:07.271046 | orchestrator | 2026-03-29 00:55:07.271053 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-29 00:55:07.271060 | orchestrator | Sunday 29 March 2026 00:50:55 +0000 (0:00:04.911) 0:02:04.235 ********** 
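The `custom_member_list` entries logged above are pre-formatted HAProxy `server` lines (address, `check inter 2000 rise 2 fall 5`, and for the TLS proxy variant `ssl verify required ca-file ...`), with a trailing empty string that the template has to drop. A minimal illustrative sketch of how such a list could be rendered into a backend stanza (the `render_backend` helper and the `_back` naming are assumptions for this example, not kolla-ansible code):

```python
# Sketch only -- NOT part of kolla-ansible. Shows how a logged 'custom_member_list'
# plus 'backend_http_extra' could map onto an HAProxy backend block.
def render_backend(name, members, extra=None):
    """Render a minimal HAProxy backend block from pre-formatted 'server ...' lines."""
    lines = [f"backend {name}_back", "    mode http"]
    for opt in extra or []:          # e.g. the logged 'timeout server 6h'
        lines.append(f"    {opt}")
    for member in members:
        if member:                   # the logged lists end with a spurious '' entry
            lines.append(f"    {member}")
    return "\n".join(lines)

# Data taken verbatim from the glance_api entry in the log above.
glance_members = [
    "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
    "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
    "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
    "",
]
print(render_backend("glance_api", glance_members, extra=["timeout server 6h"]))
```

In the `server` options, `check inter 2000` enables health checks every 2000 ms, and `rise 2` / `fall 5` set how many consecutive successes or failures flip a member up or down.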
2026-03-29 00:55:07.271070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 00:55:07.271098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 
'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 00:55:07.271105 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.271115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 00:55:07.271127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 00:55:07.271142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 00:55:07.271152 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.271159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 00:55:07.271167 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.271173 | orchestrator | 2026-03-29 00:55:07.271180 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-29 00:55:07.271186 | orchestrator | Sunday 29 March 2026 00:50:59 +0000 (0:00:03.774) 0:02:08.009 ********** 2026-03-29 00:55:07.271193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 00:55:07.271205 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 00:55:07.271216 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.271225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 00:55:07.271233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 00:55:07.271240 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.271246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-29 00:55:07.271253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-29 00:55:07.271260 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.271267 | orchestrator |
2026-03-29 00:55:07.271273 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-03-29 00:55:07.271324 | orchestrator | Sunday 29 March 2026 00:51:04 +0000 (0:00:04.807) 0:02:12.816 **********
2026-03-29 00:55:07.271331 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:55:07.271337 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:55:07.271344 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:55:07.271350 | orchestrator |
2026-03-29 00:55:07.271356 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-03-29 00:55:07.271363 | orchestrator | Sunday 29 March 2026 00:51:05 +0000 (0:00:01.257) 0:02:14.074 **********
2026-03-29 00:55:07.271369 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:55:07.271376 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:55:07.271383 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:55:07.271389 | orchestrator |
2026-03-29 00:55:07.271396 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-03-29 00:55:07.271402 | orchestrator | Sunday 29 March 2026 00:51:07 +0000 (0:00:02.461) 0:02:16.535 **********
2026-03-29 00:55:07.271409 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.271415 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.271421 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.271427 | orchestrator |
2026-03-29 00:55:07.271434 | orchestrator | TASK [include_role : grafana] **************************************************
2026-03-29 00:55:07.271440 | orchestrator | Sunday 29 March 2026 00:51:08 +0000 (0:00:00.312) 0:02:16.848 **********
2026-03-29 00:55:07.271480 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:55:07.271488 | orchestrator |
2026-03-29 00:55:07.271494 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-03-29 00:55:07.271501 | orchestrator | Sunday 29 March 2026 00:51:09 +0000 (0:00:01.215) 0:02:18.064 **********
2026-03-29 00:55:07.271511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-29 00:55:07.271521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana',
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 00:55:07.271529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 00:55:07.271535 | orchestrator | 2026-03-29 00:55:07.271542 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-29 00:55:07.271547 | orchestrator | Sunday 29 March 2026 00:51:13 +0000 (0:00:03.881) 0:02:21.945 ********** 2026-03-29 00:55:07.271552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 00:55:07.271560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 00:55:07.271571 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.271577 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.271583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 00:55:07.271590 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.271596 | orchestrator | 2026-03-29 00:55:07.271605 | orchestrator | TASK 
[haproxy-config : Configuring firewall for grafana] ***********************
2026-03-29 00:55:07.271612 | orchestrator | Sunday 29 March 2026 00:51:13 +0000 (0:00:00.332) 0:02:22.278 **********
2026-03-29 00:55:07.271619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-29 00:55:07.271626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-29 00:55:07.271635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-29 00:55:07.271642 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.271648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-29 00:55:07.271655 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.271661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-29 00:55:07.271668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-29 00:55:07.271674 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.271681 | orchestrator |
2026-03-29 00:55:07.271687 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-03-29 00:55:07.271694 | orchestrator | Sunday 29 March 2026 00:51:14 +0000 (0:00:00.716) 0:02:22.995 **********
2026-03-29 00:55:07.271700 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:55:07.271718 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:55:07.271725 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:55:07.271731 | orchestrator |
2026-03-29 00:55:07.271738 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-03-29 00:55:07.271744 | orchestrator | Sunday 29 March 2026 00:51:15 +0000 (0:00:01.490) 0:02:24.485 **********
2026-03-29 00:55:07.271750 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:55:07.271756 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:55:07.271763 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:55:07.271769 | orchestrator |
2026-03-29 00:55:07.271775 | orchestrator | TASK [include_role : heat] *****************************************************
2026-03-29 00:55:07.271782 | orchestrator | Sunday 29 March 2026 00:51:17 +0000 (0:00:02.042) 0:02:26.528 **********
2026-03-29 00:55:07.271788 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.271795 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.271801 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.271812 | orchestrator |
2026-03-29 00:55:07.271818 | orchestrator | TASK [include_role : horizon] **************************************************
2026-03-29 00:55:07.271825 | orchestrator | Sunday 29 March 2026 00:51:18 +0000 (0:00:00.274) 0:02:26.803 **********
2026-03-29 00:55:07.271831 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:55:07.271837 | orchestrator |
2026-03-29 00:55:07.271843 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-03-29 00:55:07.271850 | orchestrator | Sunday 29 March 2026 00:51:19 +0000 (0:00:00.910) 0:02:27.713 **********
2026-03-29 00:55:07.271866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 00:55:07.271875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 00:55:07.271894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 00:55:07.271901 | orchestrator | 2026-03-29 00:55:07.271907 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-29 00:55:07.271914 | orchestrator | Sunday 29 March 2026 00:51:22 +0000 (0:00:03.139) 0:02:30.853 ********** 2026-03-29 00:55:07.271921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 00:55:07.271933 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.272603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 00:55:07.272622 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.272629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 
'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 00:55:07.272641 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.272648 | orchestrator | 2026-03-29 00:55:07.272654 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-29 00:55:07.272661 | orchestrator | Sunday 29 March 2026 00:51:22 +0000 (0:00:00.556) 0:02:31.410 ********** 2026-03-29 00:55:07.272672 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-29 00:55:07.272681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 00:55:07.272690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-29 00:55:07.272697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 00:55:07.272736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-29 00:55:07.272743 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.272750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-29 00:55:07.272761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 00:55:07.272768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-29 00:55:07.272848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 00:55:07.272856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-29 00:55:07.272862 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.272869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-29 00:55:07.272875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 00:55:07.272881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-29 00:55:07.272891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 00:55:07.272898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-29 00:55:07.272904 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.272910 | orchestrator | 2026-03-29 00:55:07.272917 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-29 00:55:07.272924 | orchestrator | Sunday 29 March 2026 00:51:23 +0000 (0:00:00.885) 0:02:32.295 ********** 2026-03-29 00:55:07.272931 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:55:07.272938 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:55:07.272944 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.272950 | orchestrator | 2026-03-29 00:55:07.272953 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-29 00:55:07.272960 | orchestrator | Sunday 29 March 2026 00:51:25 +0000 (0:00:01.594) 0:02:33.889 
********** 2026-03-29 00:55:07.272964 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:55:07.272967 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:55:07.272971 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.272978 | orchestrator | 2026-03-29 00:55:07.272982 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-29 00:55:07.272986 | orchestrator | Sunday 29 March 2026 00:51:27 +0000 (0:00:01.944) 0:02:35.834 ********** 2026-03-29 00:55:07.272990 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.272994 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.272997 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.273001 | orchestrator | 2026-03-29 00:55:07.273005 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-29 00:55:07.273009 | orchestrator | Sunday 29 March 2026 00:51:27 +0000 (0:00:00.309) 0:02:36.143 ********** 2026-03-29 00:55:07.273012 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.273016 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.273020 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.273024 | orchestrator | 2026-03-29 00:55:07.273028 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-29 00:55:07.273031 | orchestrator | Sunday 29 March 2026 00:51:27 +0000 (0:00:00.247) 0:02:36.391 ********** 2026-03-29 00:55:07.273035 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:55:07.273039 | orchestrator | 2026-03-29 00:55:07.273043 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-29 00:55:07.273046 | orchestrator | Sunday 29 March 2026 00:51:28 +0000 (0:00:01.019) 0:02:37.410 ********** 2026-03-29 00:55:07.273051 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 00:55:07.273056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}}) 2026-03-29 00:55:07.273063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 00:55:07.273072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 00:55:07.273076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 00:55:07.273080 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 00:55:07.273084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 00:55:07.273088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 00:55:07.273095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 00:55:07.273101 | orchestrator | 2026-03-29 00:55:07.273105 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-29 00:55:07.273109 | orchestrator | Sunday 29 March 2026 00:51:31 +0000 (0:00:02.849) 0:02:40.260 ********** 2026-03-29 00:55:07.273115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 00:55:07.273119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 00:55:07.273123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 00:55:07.273127 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.273131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 00:55:07.273137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 00:55:07.273146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}})
2026-03-29 00:55:07.273150 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.273154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-29 00:55:07.273158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-29 00:55:07.273162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-29 00:55:07.273166 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.273170 | orchestrator |
2026-03-29 00:55:07.273174 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-03-29 00:55:07.273177 | orchestrator | Sunday 29 March 2026 00:51:32 +0000 (0:00:00.575) 0:02:40.836 **********
2026-03-29 00:55:07.273181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-29 00:55:07.273188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-29 00:55:07.273192 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.273198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-29 00:55:07.273202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-29 00:55:07.273206 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.273212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-29 00:55:07.273217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-29 00:55:07.273221 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.273226 | orchestrator |
2026-03-29 00:55:07.273230 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-03-29 00:55:07.273234 | orchestrator | Sunday 29 March 2026 00:51:33 +0000 (0:00:01.132) 0:02:41.769 **********
2026-03-29 00:55:07.273239 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:55:07.273243 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:55:07.273247 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:55:07.273252 | orchestrator |
2026-03-29 00:55:07.273256 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-03-29 00:55:07.273260 | orchestrator | Sunday 29 March 2026 00:51:34 +0000 (0:00:01.865) 0:02:42.902 **********
2026-03-29 00:55:07.273265 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:55:07.273269 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:55:07.273273 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:55:07.273277 | orchestrator |
2026-03-29 00:55:07.273290 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-03-29 00:55:07.273295 | orchestrator | Sunday 29 March 2026 00:51:36 +0000 (0:00:00.308) 0:02:44.767 **********
2026-03-29 00:55:07.273299 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.273303 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.273323 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.273328 | orchestrator |
2026-03-29 00:55:07.273333 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-03-29 00:55:07.273337 | orchestrator | Sunday 29 March 2026 00:51:36 +0000 (0:00:01.233) 0:02:45.076 **********
2026-03-29 00:55:07.273342 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:55:07.273349 | orchestrator |
2026-03-29 00:55:07.273357 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-03-29 00:55:07.273367 | orchestrator | Sunday 29 March 2026 00:51:37 +0000 (0:00:03.496) 0:02:46.309 **********
2026-03-29 00:55:07.273374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 00:55:07.273387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 00:55:07.273425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 00:55:07.273438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273470 | orchestrator |
2026-03-29 00:55:07.273477 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-03-29 00:55:07.273483 | orchestrator | Sunday 29 March 2026 00:51:41 +0000 (0:00:03.496) 0:02:49.805 **********
2026-03-29 00:55:07.273490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 00:55:07.273504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273511 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.273519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 00:55:07.273526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273535 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.273540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 00:55:07.273545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273549 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.273554 | orchestrator |
2026-03-29 00:55:07.273560 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-03-29 00:55:07.273564 | orchestrator | Sunday 29 March 2026 00:51:41 +0000 (0:00:00.615) 0:02:50.421 **********
2026-03-29 00:55:07.273568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-03-29 00:55:07.273573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-03-29 00:55:07.273577 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.273582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-03-29 00:55:07.273586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-03-29 00:55:07.273590 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.273594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-03-29 00:55:07.273598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-03-29 00:55:07.273602 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.273605 | orchestrator |
2026-03-29 00:55:07.273609 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-03-29 00:55:07.273613 | orchestrator | Sunday 29 March 2026 00:51:42 +0000 (0:00:00.990) 0:02:51.412 **********
2026-03-29 00:55:07.273619 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:55:07.273623 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:55:07.273627 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:55:07.273630 | orchestrator |
2026-03-29 00:55:07.273634 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-03-29 00:55:07.273638 | orchestrator | Sunday 29 March 2026 00:51:44 +0000 (0:00:01.336) 0:02:52.749 **********
2026-03-29 00:55:07.273642 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:55:07.273646 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:55:07.273649 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:55:07.273653 | orchestrator |
2026-03-29 00:55:07.273657 | orchestrator | TASK [include_role : manila] ***************************************************
2026-03-29 00:55:07.273661 | orchestrator | Sunday 29 March 2026 00:51:46 +0000 (0:00:02.081) 0:02:54.830 **********
2026-03-29 00:55:07.273665 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:55:07.273668 | orchestrator |
2026-03-29 00:55:07.273672 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-03-29 00:55:07.273676 | orchestrator | Sunday 29 March 2026 00:51:47 +0000 (0:00:01.042) 0:02:55.873 **********
2026-03-29 00:55:07.273680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-29 00:55:07.273684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-29 00:55:07.273721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-29 00:55:07.273743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273757 | orchestrator |
2026-03-29 00:55:07.273761 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-03-29 00:55:07.273765 | orchestrator | Sunday 29 March 2026 00:51:51 +0000 (0:00:04.366) 0:03:00.239 **********
2026-03-29 00:55:07.273769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-29 00:55:07.273773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273791 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.273795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-29 00:55:07.273799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-29 00:55:07.273813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273819 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.273825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.273837 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.273841 | orchestrator |
2026-03-29 00:55:07.273844 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-03-29 00:55:07.273848 | orchestrator | Sunday 29 March 2026 00:51:52 +0000 (0:00:00.609) 0:03:00.849 **********
2026-03-29 00:55:07.273852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-29 00:55:07.273856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-29 00:55:07.273860 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.273864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-29 00:55:07.273867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-29 00:55:07.273871 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.273875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-29 00:55:07.273879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-29 00:55:07.273883 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.273886 | orchestrator |
2026-03-29 00:55:07.273890 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-03-29 00:55:07.273898 | orchestrator | Sunday 29 March 2026 00:51:52 +0000 (0:00:00.753) 0:03:01.602 **********
2026-03-29 00:55:07.273901 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:55:07.273927 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:55:07.273932 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:55:07.273936 | orchestrator |
2026-03-29 00:55:07.273939 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-03-29 00:55:07.273943 | orchestrator | Sunday 29 March 2026 00:51:54 +0000 (0:00:01.428) 0:03:03.031 **********
2026-03-29 00:55:07.273947 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:55:07.273951 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:55:07.273955 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:55:07.273958 | orchestrator |
2026-03-29 00:55:07.273962 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-03-29 00:55:07.273966 | orchestrator |
Sunday 29 March 2026 00:51:56 +0000 (0:00:02.152) 0:03:05.183 ********** 2026-03-29 00:55:07.273970 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:55:07.273974 | orchestrator | 2026-03-29 00:55:07.273980 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-29 00:55:07.273983 | orchestrator | Sunday 29 March 2026 00:51:57 +0000 (0:00:01.357) 0:03:06.541 ********** 2026-03-29 00:55:07.273988 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 00:55:07.273991 | orchestrator | 2026-03-29 00:55:07.273995 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-29 00:55:07.273999 | orchestrator | Sunday 29 March 2026 00:52:01 +0000 (0:00:03.634) 0:03:10.175 ********** 2026-03-29 00:55:07.274003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:55:07.274008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 00:55:07.274051 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.274063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:55:07.274098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 00:55:07.274102 | orchestrator | skipping: [testbed-node-1] 2026-03-29 
00:55:07.274106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:55:07.274138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 00:55:07.274142 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.274146 | orchestrator | 2026-03-29 00:55:07.274150 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-29 00:55:07.274156 | orchestrator | Sunday 29 March 2026 00:52:03 +0000 (0:00:02.039) 0:03:12.215 ********** 2026-03-29 00:55:07.274162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:55:07.274182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 00:55:07.274189 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.274200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:55:07.274215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 00:55:07.274219 | orchestrator | skipping: [testbed-node-1] 
2026-03-29 00:55:07.274223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:55:07.274230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 00:55:07.274234 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.274237 | orchestrator | 2026-03-29 00:55:07.274241 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-29 00:55:07.274245 | orchestrator | Sunday 29 March 2026 00:52:05 +0000 (0:00:02.156) 0:03:14.371 ********** 2026-03-29 00:55:07.274251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 00:55:07.274258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 00:55:07.274262 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.274265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 00:55:07.274277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 00:55:07.274281 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.274284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 00:55:07.274291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 00:55:07.274295 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.274299 | orchestrator | 2026-03-29 00:55:07.274303 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-29 00:55:07.274307 | orchestrator | Sunday 29 March 2026 00:52:08 +0000 (0:00:02.547) 0:03:16.919 ********** 2026-03-29 00:55:07.274321 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:55:07.274325 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:55:07.274329 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.274333 | orchestrator | 2026-03-29 00:55:07.274337 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-29 00:55:07.274340 | orchestrator | Sunday 29 March 2026 00:52:10 +0000 (0:00:02.137) 0:03:19.056 ********** 2026-03-29 00:55:07.274344 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.274348 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.274352 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.274356 | orchestrator | 2026-03-29 00:55:07.274359 | orchestrator | TASK [include_role : masakari] ************************************************* 
2026-03-29 00:55:07.274363 | orchestrator | Sunday 29 March 2026 00:52:12 +0000 (0:00:01.880) 0:03:20.937 ********** 2026-03-29 00:55:07.274369 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.274373 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.274377 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.274381 | orchestrator | 2026-03-29 00:55:07.274395 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-29 00:55:07.274400 | orchestrator | Sunday 29 March 2026 00:52:12 +0000 (0:00:00.317) 0:03:21.254 ********** 2026-03-29 00:55:07.274403 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:55:07.274407 | orchestrator | 2026-03-29 00:55:07.274411 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-29 00:55:07.274415 | orchestrator | Sunday 29 March 2026 00:52:14 +0000 (0:00:01.474) 0:03:22.728 ********** 2026-03-29 00:55:07.274421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-29 00:55:07.274426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 
'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-29 00:55:07.274432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-29 00:55:07.274436 | orchestrator | 2026-03-29 00:55:07.274440 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-29 00:55:07.274444 | orchestrator | Sunday 29 March 2026 00:52:15 +0000 (0:00:01.554) 0:03:24.283 ********** 2026-03-29 00:55:07.274448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-29 00:55:07.274459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-29 00:55:07.274463 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.274494 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.274506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 
'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-29 00:55:07.274513 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.274519 | orchestrator | 2026-03-29 00:55:07.274525 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-29 00:55:07.274541 | orchestrator | Sunday 29 March 2026 00:52:15 +0000 (0:00:00.381) 0:03:24.664 ********** 2026-03-29 00:55:07.274547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-29 00:55:07.274553 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.274560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-29 00:55:07.274566 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.274572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-29 00:55:07.274579 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.274586 | orchestrator | 2026-03-29 00:55:07.274593 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-29 00:55:07.274599 | orchestrator | Sunday 29 March 2026 00:52:17 +0000 (0:00:01.079) 0:03:25.743 
********** 2026-03-29 00:55:07.274606 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.274612 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.274619 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.274625 | orchestrator | 2026-03-29 00:55:07.274631 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-29 00:55:07.274637 | orchestrator | Sunday 29 March 2026 00:52:17 +0000 (0:00:00.465) 0:03:26.209 ********** 2026-03-29 00:55:07.274644 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.274650 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.274656 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.274700 | orchestrator | 2026-03-29 00:55:07.274716 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-29 00:55:07.274720 | orchestrator | Sunday 29 March 2026 00:52:18 +0000 (0:00:01.076) 0:03:27.285 ********** 2026-03-29 00:55:07.274724 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.274728 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.274731 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.274735 | orchestrator | 2026-03-29 00:55:07.274739 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-29 00:55:07.274743 | orchestrator | Sunday 29 March 2026 00:52:18 +0000 (0:00:00.287) 0:03:27.573 ********** 2026-03-29 00:55:07.274746 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:55:07.274750 | orchestrator | 2026-03-29 00:55:07.274754 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-29 00:55:07.274757 | orchestrator | Sunday 29 March 2026 00:52:20 +0000 (0:00:01.299) 0:03:28.873 ********** 2026-03-29 00:55:07.274766 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 00:55:07.274779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.274784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.274788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.274792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2026-03-29 00:55:07.274796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.274804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:55:07.274814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:55:07.274818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.274822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 00:55:07.274826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.274831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 
'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-29 00:55:07.274835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:55:07.274841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.274850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 00:55:07.274867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 00:55:07.274872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': 
'30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 00:55:07.274876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 00:55:07.274889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.274893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.274897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.274901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.274905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.274912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.274920 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-29 00:55:07.274924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-29 00:55:07.274928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.274932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.274936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:55:07.274940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:55:07.274949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:55:07.274955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:55:07.274959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.274963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.274967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 00:55:07.274971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 00:55:07.274981 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.274987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.274991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-29 00:55:07.274995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-29 00:55:07.274999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:55:07.275003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:55:07.275025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 00:55:07.275038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275043 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 00:55:07.275047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 00:55:07.275051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 00:55:07.275057 | orchestrator | 2026-03-29 00:55:07.275061 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-29 00:55:07.275065 | orchestrator | Sunday 29 March 2026 00:52:23 +0000 (0:00:03.827) 0:03:32.700 ********** 2026-03-29 00:55:07.275071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 00:55:07.275077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 00:55:07.275081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2026-03-29 00:55:07.275116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-29 00:55:07.275123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:55:07.275135 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:55:07.275143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:55:07.275147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:55:07.275158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 00:55:07.275164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 00:55:07.275178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 00:55:07.275185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-29 00:55:07.275212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-29 00:55:07.275227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2026-03-29 00:55:07.275231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:55:07.275242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 
5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-29 00:55:07.275263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 00:55:07.275270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 00:55:07.275274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 00:55:07.275280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.275284 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.275290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-29 00:55:07.275294 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.275298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-29 00:55:07.275302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-29 00:55:07.275310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.275314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-29 00:55:07.275460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.275478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-29 00:55:07.275484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-29 00:55:07.275488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.275497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-29 00:55:07.275502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-29 00:55:07.275505 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.275509 | orchestrator |
2026-03-29 00:55:07.275513 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-03-29 00:55:07.275517 | orchestrator | Sunday 29 March 2026 00:52:25 +0000 (0:00:01.665) 0:03:34.366 **********
2026-03-29 00:55:07.275521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-29 00:55:07.275525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-29 00:55:07.275529 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.275536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-29 00:55:07.275541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-29 00:55:07.275545 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.275549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-29 00:55:07.275554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-29 00:55:07.275558 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.275562 | orchestrator |
2026-03-29 00:55:07.275566 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-03-29 00:55:07.275569 | orchestrator | Sunday 29 March 2026 00:52:27 +0000 (0:00:01.397) 0:03:35.764 **********
2026-03-29 00:55:07.275577 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:55:07.275581 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:55:07.275585 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:55:07.275588 | orchestrator |
2026-03-29 00:55:07.275592 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-03-29 00:55:07.275596 | orchestrator | Sunday 29 March 2026 00:52:28 +0000 (0:00:01.187) 0:03:36.951 **********
2026-03-29 00:55:07.275600 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:55:07.275603 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:55:07.275607 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:55:07.275611 | orchestrator |
2026-03-29 00:55:07.275615 | orchestrator | TASK [include_role : placement] ************************************************
2026-03-29 00:55:07.275618 | orchestrator | Sunday 29 March 2026 00:52:30 +0000 (0:00:01.898) 0:03:38.850 **********
2026-03-29 00:55:07.275622 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:55:07.275626 | orchestrator |
2026-03-29 00:55:07.275630 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-03-29 00:55:07.275633 | orchestrator | Sunday 29 March 2026 00:52:31 +0000 (0:00:01.245) 0:03:40.096 **********
2026-03-29 00:55:07.275637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 00:55:07.275641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 00:55:07.275648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 00:55:07.275652 | orchestrator |
2026-03-29 00:55:07.275655 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-03-29 00:55:07.275662 | orchestrator | Sunday 29 March 2026 00:52:34 +0000 (0:00:02.859) 0:03:42.956 **********
2026-03-29 00:55:07.275668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 00:55:07.275672 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.275676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 00:55:07.275680 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.275684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-29 00:55:07.275690 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.275698 | orchestrator |
2026-03-29 00:55:07.275738 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-03-29 00:55:07.275744 | orchestrator | Sunday 29 March 2026 00:52:34 +0000 (0:00:00.439) 0:03:43.395 **********
2026-03-29 00:55:07.275750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-29 00:55:07.275756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-29 00:55:07.275763 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.275772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-29 00:55:07.275784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-29 00:55:07.275791 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.275800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-29 00:55:07.275807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-29 00:55:07.275811 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.275815 | orchestrator |
2026-03-29 00:55:07.275819 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-03-29 00:55:07.275822 | orchestrator | Sunday 29 March 2026 00:52:35 +0000 (0:00:01.050) 0:03:44.446 **********
2026-03-29 00:55:07.275826 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:55:07.275830 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:55:07.275834 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:55:07.275838 | orchestrator |
2026-03-29 00:55:07.275841 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-03-29 00:55:07.275845 | orchestrator | Sunday 29 March 2026 00:52:37 +0000 (0:00:01.408) 0:03:45.854 **********
2026-03-29 00:55:07.275849 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:55:07.275853 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:55:07.275857 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:55:07.275861 | orchestrator |
2026-03-29 00:55:07.275865 | orchestrator | TASK [include_role : nova] *****************************************************
2026-03-29 00:55:07.275868 | orchestrator | Sunday 29 March 2026 00:52:39 +0000 (0:00:01.869) 0:03:47.723 **********
2026-03-29 00:55:07.275872 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:55:07.275876 | orchestrator |
2026-03-29 00:55:07.275880 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-03-29 00:55:07.275884 | orchestrator | Sunday 29 March 2026 00:52:40 +0000 (0:00:01.304) 0:03:49.027 **********
2026-03-29 00:55:07.275888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-29 00:55:07.275893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.275902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.275909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-29 00:55:07.275914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-29 00:55:07.275918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.275925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.275931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.275937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.275941 | orchestrator |
2026-03-29 00:55:07.275944 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-03-29 00:55:07.275948 | orchestrator | Sunday 29 March 2026 00:52:44 +0000 (0:00:03.741) 0:03:52.769 **********
2026-03-29 00:55:07.275952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-29 00:55:07.275957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.275960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.275967 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.275975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-29 00:55:07.275980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.275984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.275987 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.275992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-29 00:55:07.275999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.276006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-29 00:55:07.276010 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.276014 | orchestrator |
2026-03-29 00:55:07.276018 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-03-29 00:55:07.276051 | orchestrator | Sunday 29 March 2026 00:52:44 +0000 (0:00:00.543) 0:03:53.314 **********
2026-03-29 00:55:07.276056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-29 00:55:07.276060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-29 00:55:07.276075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-29 00:55:07.276079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-29 00:55:07.276083 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:55:07.276087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-29 00:55:07.276091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-29 00:55:07.276095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-29 00:55:07.276098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-29 00:55:07.276102 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:55:07.276106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-29 00:55:07.276113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-29 00:55:07.276117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-29 00:55:07.276120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-29 00:55:07.276124 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:55:07.276128 | orchestrator |
2026-03-29 00:55:07.276132 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-03-29 00:55:07.276136 | orchestrator | Sunday 29 March 2026 00:52:45 +0000 (0:00:00.809) 0:03:54.123 **********
2026-03-29 00:55:07.276139 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:55:07.276143 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:55:07.276147 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:55:07.276150 | orchestrator |
2026-03-29 00:55:07.276154 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-03-29 00:55:07.276158 | orchestrator | Sunday 29 March 2026 00:52:46 +0000 (0:00:01.440) 0:03:55.563 **********
2026-03-29 00:55:07.276162 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:55:07.276166 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:55:07.276169 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:55:07.276173 | orchestrator |
2026-03-29 00:55:07.276177 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-03-29 00:55:07.276181 | orchestrator | Sunday 29 March 2026 00:52:48 +0000 (0:00:01.298) 0:03:57.511 **********
2026-03-29 00:55:07.276184 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:55:07.276188 | orchestrator |
2026-03-29 00:55:07.276192 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-03-29 00:55:07.276198 | orchestrator | Sunday 29 March 2026 00:52:50 +0000 (0:00:01.298) 0:03:58.809 **********
2026-03-29 00:55:07.276202 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-03-29 00:55:07.276206 | orchestrator |
2026-03-29 00:55:07.276210 | orchestrator | TASK
[haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-29 00:55:07.276213 | orchestrator | Sunday 29 March 2026 00:52:51 +0000 (0:00:01.633) 0:04:00.443 ********** 2026-03-29 00:55:07.276219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-29 00:55:07.276224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-29 00:55:07.276228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-29 00:55:07.276234 | orchestrator | 2026-03-29 00:55:07.276238 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external 
frontend] *** 2026-03-29 00:55:07.276242 | orchestrator | Sunday 29 March 2026 00:52:55 +0000 (0:00:04.229) 0:04:04.672 ********** 2026-03-29 00:55:07.276246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 00:55:07.276250 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.276254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 00:55:07.276258 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.276262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 00:55:07.276266 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.276269 | 
orchestrator | 2026-03-29 00:55:07.276273 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-29 00:55:07.276277 | orchestrator | Sunday 29 March 2026 00:52:57 +0000 (0:00:01.632) 0:04:06.305 ********** 2026-03-29 00:55:07.276283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 00:55:07.276287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 00:55:07.276291 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.276295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 00:55:07.276301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 00:55:07.276305 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.276311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 00:55:07.276315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 00:55:07.276319 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.276323 | orchestrator | 2026-03-29 00:55:07.276327 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-29 00:55:07.276331 | orchestrator | Sunday 29 March 2026 00:52:59 +0000 (0:00:02.371) 0:04:08.677 ********** 2026-03-29 00:55:07.276334 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:55:07.276338 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:55:07.276342 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.276346 | orchestrator | 2026-03-29 00:55:07.276349 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-29 00:55:07.276353 | orchestrator | Sunday 29 March 2026 00:53:02 +0000 (0:00:02.470) 0:04:11.147 ********** 2026-03-29 00:55:07.276357 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:55:07.276361 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:55:07.276364 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.276368 | orchestrator | 2026-03-29 00:55:07.276372 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-29 00:55:07.276376 | orchestrator | Sunday 29 March 2026 00:53:05 +0000 (0:00:03.080) 0:04:14.228 ********** 2026-03-29 00:55:07.276379 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-29 00:55:07.276383 | orchestrator | 2026-03-29 00:55:07.276387 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-29 00:55:07.276391 | orchestrator | Sunday 29 March 2026 00:53:06 +0000 (0:00:00.749) 0:04:14.978 ********** 2026-03-29 00:55:07.276395 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 00:55:07.276399 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.276403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 00:55:07.276407 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.276412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 00:55:07.276417 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.276423 | orchestrator | 2026-03-29 00:55:07.276427 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy 
when using single external frontend] *** 2026-03-29 00:55:07.276432 | orchestrator | Sunday 29 March 2026 00:53:07 +0000 (0:00:01.131) 0:04:16.110 ********** 2026-03-29 00:55:07.276443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 00:55:07.276452 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.276458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 00:55:07.276465 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.276470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  
2026-03-29 00:55:07.276477 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.276483 | orchestrator | 2026-03-29 00:55:07.276488 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-29 00:55:07.276495 | orchestrator | Sunday 29 March 2026 00:53:08 +0000 (0:00:01.556) 0:04:17.666 ********** 2026-03-29 00:55:07.276501 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.276507 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.276513 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.276520 | orchestrator | 2026-03-29 00:55:07.276526 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-29 00:55:07.276533 | orchestrator | Sunday 29 March 2026 00:53:10 +0000 (0:00:01.203) 0:04:18.870 ********** 2026-03-29 00:55:07.276539 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:55:07.276543 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:55:07.276547 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:55:07.276551 | orchestrator | 2026-03-29 00:55:07.276555 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-29 00:55:07.276559 | orchestrator | Sunday 29 March 2026 00:53:12 +0000 (0:00:02.505) 0:04:21.375 ********** 2026-03-29 00:55:07.276562 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:55:07.276566 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:55:07.276570 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:55:07.276574 | orchestrator | 2026-03-29 00:55:07.276578 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-29 00:55:07.276581 | orchestrator | Sunday 29 March 2026 00:53:15 +0000 (0:00:03.121) 0:04:24.497 ********** 2026-03-29 00:55:07.276585 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => 
(item=nova-serialproxy) 2026-03-29 00:55:07.276589 | orchestrator | 2026-03-29 00:55:07.276593 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-29 00:55:07.276597 | orchestrator | Sunday 29 March 2026 00:53:16 +0000 (0:00:00.815) 0:04:25.312 ********** 2026-03-29 00:55:07.276605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 00:55:07.276609 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.276616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 00:55:07.276620 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.276626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 00:55:07.276630 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.276634 | orchestrator | 2026-03-29 00:55:07.276638 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-29 00:55:07.276641 | orchestrator | Sunday 29 March 2026 00:53:17 +0000 (0:00:01.345) 0:04:26.658 ********** 2026-03-29 00:55:07.276645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 00:55:07.276649 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.276653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 00:55:07.276657 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.276661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 00:55:07.276665 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.276671 | orchestrator | 2026-03-29 00:55:07.276675 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-29 00:55:07.276679 | orchestrator | Sunday 29 March 2026 00:53:19 +0000 (0:00:01.268) 0:04:27.926 ********** 2026-03-29 00:55:07.276682 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.276686 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.276690 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.276694 | orchestrator | 2026-03-29 00:55:07.276698 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-29 00:55:07.276717 | orchestrator | Sunday 29 March 2026 00:53:20 +0000 (0:00:01.514) 0:04:29.441 ********** 2026-03-29 00:55:07.276722 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:55:07.276726 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:55:07.276730 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:55:07.276733 | orchestrator | 2026-03-29 00:55:07.276737 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-29 00:55:07.276741 | orchestrator | Sunday 29 March 2026 00:53:23 +0000 (0:00:02.708) 0:04:32.149 ********** 2026-03-29 00:55:07.276745 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:55:07.276748 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:55:07.276752 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:55:07.276756 | orchestrator | 2026-03-29 00:55:07.276760 | 
orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-29 00:55:07.276763 | orchestrator | Sunday 29 March 2026 00:53:26 +0000 (0:00:03.013) 0:04:35.163 ********** 2026-03-29 00:55:07.276767 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:55:07.276771 | orchestrator | 2026-03-29 00:55:07.276775 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-29 00:55:07.276778 | orchestrator | Sunday 29 March 2026 00:53:27 +0000 (0:00:01.328) 0:04:36.491 ********** 2026-03-29 00:55:07.276787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 00:55:07.276792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 00:55:07.276796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 00:55:07.276803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 00:55:07.276811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.276821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 00:55:07.276836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 
2026-03-29 00:55:07.276843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 00:55:07.276849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 00:55:07.276861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 00:55:07.276868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 00:55:07.276874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 00:55:07.276884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.276894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 00:55:07.276902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.276908 | orchestrator | 2026-03-29 00:55:07.276915 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-29 00:55:07.276923 | orchestrator | Sunday 29 March 2026 00:53:31 +0000 (0:00:03.870) 0:04:40.361 ********** 2026-03-29 00:55:07.276929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 00:55:07.276935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 00:55:07.276942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 00:55:07.276952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 00:55:07.276962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.276968 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.276975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 00:55:07.276986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 00:55:07.276993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 00:55:07.276999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 00:55:07.277005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.277227 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.277247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 00:55:07.277254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 00:55:07.277262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 00:55:07.277267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 00:55:07.277270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:55:07.277274 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.277278 | orchestrator | 2026-03-29 00:55:07.277282 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-29 
00:55:07.277286 | orchestrator | Sunday 29 March 2026 00:53:32 +0000 (0:00:01.110) 0:04:41.472 ********** 2026-03-29 00:55:07.277290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 00:55:07.277295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 00:55:07.277299 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.277315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 00:55:07.277319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 00:55:07.277323 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.277327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 00:55:07.277333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 00:55:07.277339 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.277343 | orchestrator | 2026-03-29 00:55:07.277347 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] 
************ 2026-03-29 00:55:07.277351 | orchestrator | Sunday 29 March 2026 00:53:33 +0000 (0:00:00.905) 0:04:42.378 ********** 2026-03-29 00:55:07.277354 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:55:07.277358 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:55:07.277362 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.277365 | orchestrator | 2026-03-29 00:55:07.277369 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-29 00:55:07.277373 | orchestrator | Sunday 29 March 2026 00:53:35 +0000 (0:00:01.368) 0:04:43.747 ********** 2026-03-29 00:55:07.277377 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:55:07.277380 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:55:07.277384 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.277388 | orchestrator | 2026-03-29 00:55:07.277392 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-29 00:55:07.277395 | orchestrator | Sunday 29 March 2026 00:53:37 +0000 (0:00:02.240) 0:04:45.987 ********** 2026-03-29 00:55:07.277399 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:55:07.277403 | orchestrator | 2026-03-29 00:55:07.277407 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-29 00:55:07.277410 | orchestrator | Sunday 29 March 2026 00:53:38 +0000 (0:00:01.640) 0:04:47.627 ********** 2026-03-29 00:55:07.277415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:55:07.277419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:55:07.277433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:55:07.277443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:55:07.277448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:55:07.277453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:55:07.277457 | orchestrator | 2026-03-29 00:55:07.277461 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-29 00:55:07.277465 | orchestrator | Sunday 29 March 2026 00:53:44 +0000 (0:00:05.256) 0:04:52.884 ********** 2026-03-29 00:55:07.277478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 00:55:07.277488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 00:55:07.277492 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.277496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 00:55:07.277500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 00:55:07.277505 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.277518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 00:55:07.277528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 00:55:07.277532 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.277536 | orchestrator | 2026-03-29 00:55:07.277540 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] 
******************** 2026-03-29 00:55:07.277544 | orchestrator | Sunday 29 March 2026 00:53:45 +0000 (0:00:01.029) 0:04:53.914 ********** 2026-03-29 00:55:07.277547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-29 00:55:07.277552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-29 00:55:07.277556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-29 00:55:07.277560 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.277564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-29 00:55:07.277567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-29 00:55:07.277571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-29 00:55:07.277575 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.277579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-29 00:55:07.277583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-29 00:55:07.277587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-29 00:55:07.277595 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.277599 | orchestrator | 2026-03-29 00:55:07.277602 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-29 00:55:07.277606 | orchestrator | Sunday 29 March 2026 00:53:46 +0000 (0:00:01.285) 0:04:55.200 ********** 2026-03-29 00:55:07.277610 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.277614 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.277617 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.277621 | orchestrator | 2026-03-29 00:55:07.277625 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-29 00:55:07.277629 | orchestrator | Sunday 29 March 2026 00:53:46 +0000 (0:00:00.426) 0:04:55.627 ********** 2026-03-29 00:55:07.277633 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.277636 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.277640 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.277644 | orchestrator | 2026-03-29 00:55:07.277658 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-29 00:55:07.277662 | orchestrator | Sunday 29 March 2026 00:53:48 +0000 
(0:00:01.283) 0:04:56.910 ********** 2026-03-29 00:55:07.277666 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:55:07.277670 | orchestrator | 2026-03-29 00:55:07.277674 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-29 00:55:07.277678 | orchestrator | Sunday 29 March 2026 00:53:49 +0000 (0:00:01.611) 0:04:58.521 ********** 2026-03-29 00:55:07.277683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-29 00:55:07.277688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 00:55:07.277692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-29 00:55:07.277696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.277713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 00:55:07.277717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.277732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.277739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 00:55:07.277743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.277747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 00:55:07.277751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-29 00:55:07.277758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 00:55:07.277762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.277776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.277783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 00:55:07.277787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-29 00:55:07.277792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-29 00:55:07.277799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.277803 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.277817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-29 00:55:07.277824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 
00:55:07.277829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-29 00:55:07.277833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.277841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.277846 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 00:55:07.277852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-29 00:55:07.277859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-29 00:55:07.277864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.277869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.277876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  
2026-03-29 00:55:07.277880 | orchestrator | 2026-03-29 00:55:07.277885 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-29 00:55:07.277889 | orchestrator | Sunday 29 March 2026 00:53:54 +0000 (0:00:04.336) 0:05:02.858 ********** 2026-03-29 00:55:07.277894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-29 00:55:07.277899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 00:55:07.277907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.277913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.277917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 00:55:07.277921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-29 00:55:07.277928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-29 00:55:07.277932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.277939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:55:07.277943 | orchestrator | 2026-03-29 00:55:07.277949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 00:55:07.277953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-29 00:55:07.277960 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.277964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 
'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 00:55:07.277968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.277972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.277976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 00:55:07.277982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-29 00:55:07.277989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-29 00:55:07.277995 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 00:55:07.277999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-29 00:55:07.278004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.278007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.278032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.278038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.278044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 00:55:07.278050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 00:55:07.278054 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.278058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-29 00:55:07.278064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': 
{'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-29 00:55:07.278068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.278074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:55:07.278080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 00:55:07.278089 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.278093 | orchestrator | 2026-03-29 00:55:07.278096 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-29 00:55:07.278100 | orchestrator | Sunday 29 March 2026 00:53:55 +0000 (0:00:00.919) 0:05:03.777 ********** 2026-03-29 00:55:07.278104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-29 00:55:07.278108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-29 00:55:07.278112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-29 00:55:07.278116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-29 00:55:07.278121 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.278125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-29 00:55:07.278129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-29 00:55:07.278132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-29 00:55:07.278136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-29 00:55:07.278140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-29 00:55:07.278144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-29 00:55:07.278148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-29 00:55:07.278152 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.278158 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-29 00:55:07.278162 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.278168 | orchestrator | 2026-03-29 00:55:07.278172 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-29 00:55:07.278176 | orchestrator | Sunday 29 March 2026 00:53:56 +0000 (0:00:01.296) 0:05:05.074 ********** 2026-03-29 00:55:07.278179 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.278183 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.278187 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.278191 | orchestrator | 2026-03-29 00:55:07.278197 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-29 00:55:07.278200 | orchestrator | Sunday 29 March 2026 00:53:56 +0000 (0:00:00.463) 0:05:05.537 ********** 2026-03-29 00:55:07.278204 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.278208 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.278212 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.278215 | orchestrator | 2026-03-29 00:55:07.278219 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-29 00:55:07.278223 | orchestrator | Sunday 29 March 2026 00:53:58 +0000 (0:00:01.536) 0:05:07.073 ********** 2026-03-29 00:55:07.278227 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:55:07.278230 | orchestrator | 2026-03-29 00:55:07.278234 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-29 00:55:07.278238 | orchestrator | Sunday 29 March 
2026 00:53:59 +0000 (0:00:01.362) 0:05:08.436 ********** 2026-03-29 00:55:07.278242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:55:07.278246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 
'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:55:07.278252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:55:07.278259 | orchestrator | 2026-03-29 00:55:07.278262 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-29 00:55:07.278266 | orchestrator | Sunday 29 March 2026 00:54:02 +0000 (0:00:02.689) 0:05:11.126 ********** 2026-03-29 00:55:07.278272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 00:55:07.278276 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.278280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 00:55:07.278284 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.278288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 00:55:07.278292 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.278296 | orchestrator | 2026-03-29 00:55:07.278300 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-29 00:55:07.278304 | orchestrator | Sunday 29 March 2026 00:54:02 +0000 (0:00:00.398) 0:05:11.524 ********** 2026-03-29 00:55:07.278309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-29 00:55:07.278313 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.278317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-29 00:55:07.278320 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.278324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-29 00:55:07.278328 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.278332 | orchestrator | 2026-03-29 00:55:07.278336 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-29 00:55:07.278341 | orchestrator | Sunday 29 March 2026 00:54:03 +0000 
(0:00:00.680) 0:05:12.205 ********** 2026-03-29 00:55:07.278345 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.278349 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.278353 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.278356 | orchestrator | 2026-03-29 00:55:07.278360 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-29 00:55:07.278364 | orchestrator | Sunday 29 March 2026 00:54:04 +0000 (0:00:00.966) 0:05:13.172 ********** 2026-03-29 00:55:07.278368 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.278371 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.278375 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.278379 | orchestrator | 2026-03-29 00:55:07.278383 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-29 00:55:07.278388 | orchestrator | Sunday 29 March 2026 00:54:06 +0000 (0:00:01.699) 0:05:14.871 ********** 2026-03-29 00:55:07.278392 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:55:07.278396 | orchestrator | 2026-03-29 00:55:07.278400 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-29 00:55:07.278403 | orchestrator | Sunday 29 March 2026 00:54:07 +0000 (0:00:01.470) 0:05:16.342 ********** 2026-03-29 00:55:07.278407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 00:55:07.278412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 00:55:07.278418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 00:55:07.278424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 00:55:07.278431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 00:55:07.278435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 00:55:07.278439 | orchestrator | 2026-03-29 00:55:07.278443 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-29 00:55:07.278449 | orchestrator | Sunday 29 March 2026 00:54:14 +0000 (0:00:06.372) 0:05:22.715 ********** 2026-03-29 00:55:07.278453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-29 00:55:07.278459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-29 00:55:07.278463 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.278470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-29 00:55:07.278474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-29 00:55:07.278478 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.278484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-29 00:55:07.278488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-29 00:55:07.278492 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.278496 | orchestrator | 2026-03-29 00:55:07.278502 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-29 00:55:07.278505 | orchestrator | Sunday 29 March 2026 00:54:15 +0000 (0:00:01.019) 0:05:23.734 ********** 2026-03-29 00:55:07.278509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  
2026-03-29 00:55:07.278513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-29 00:55:07.278519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-29 00:55:07.278523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-29 00:55:07.278527 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.278531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-29 00:55:07.278535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-29 00:55:07.278539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-29 00:55:07.278545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-29 00:55:07.278549 | orchestrator | skipping: [testbed-node-1] 2026-03-29 
00:55:07.278553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-29 00:55:07.278556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-29 00:55:07.278560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-29 00:55:07.278564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-29 00:55:07.278568 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.278572 | orchestrator | 2026-03-29 00:55:07.278575 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-29 00:55:07.278579 | orchestrator | Sunday 29 March 2026 00:54:15 +0000 (0:00:00.925) 0:05:24.660 ********** 2026-03-29 00:55:07.278583 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:55:07.278587 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:55:07.278590 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.278594 | orchestrator | 2026-03-29 00:55:07.278598 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-29 00:55:07.278602 | orchestrator | Sunday 29 March 2026 00:54:17 +0000 (0:00:01.285) 0:05:25.945 ********** 2026-03-29 00:55:07.278606 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:55:07.278609 | orchestrator | 
changed: [testbed-node-1] 2026-03-29 00:55:07.278613 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.278617 | orchestrator | 2026-03-29 00:55:07.278620 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-29 00:55:07.278624 | orchestrator | Sunday 29 March 2026 00:54:19 +0000 (0:00:02.306) 0:05:28.251 ********** 2026-03-29 00:55:07.278628 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.278632 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.278636 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.278639 | orchestrator | 2026-03-29 00:55:07.278643 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-29 00:55:07.278647 | orchestrator | Sunday 29 March 2026 00:54:20 +0000 (0:00:00.917) 0:05:29.169 ********** 2026-03-29 00:55:07.278651 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.278654 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.278658 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.278662 | orchestrator | 2026-03-29 00:55:07.278668 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-29 00:55:07.278672 | orchestrator | Sunday 29 March 2026 00:54:20 +0000 (0:00:00.367) 0:05:29.536 ********** 2026-03-29 00:55:07.278676 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.278679 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.278683 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.278687 | orchestrator | 2026-03-29 00:55:07.278691 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-29 00:55:07.278695 | orchestrator | Sunday 29 March 2026 00:54:21 +0000 (0:00:00.359) 0:05:29.896 ********** 2026-03-29 00:55:07.278698 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.278727 | orchestrator | 
skipping: [testbed-node-1] 2026-03-29 00:55:07.278731 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.278738 | orchestrator | 2026-03-29 00:55:07.278742 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-29 00:55:07.278756 | orchestrator | Sunday 29 March 2026 00:54:21 +0000 (0:00:00.378) 0:05:30.274 ********** 2026-03-29 00:55:07.278760 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.278764 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.278768 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.278771 | orchestrator | 2026-03-29 00:55:07.278775 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-29 00:55:07.278779 | orchestrator | Sunday 29 March 2026 00:54:22 +0000 (0:00:00.834) 0:05:31.109 ********** 2026-03-29 00:55:07.278783 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.278787 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.278790 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.278794 | orchestrator | 2026-03-29 00:55:07.278798 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-29 00:55:07.278802 | orchestrator | Sunday 29 March 2026 00:54:23 +0000 (0:00:00.646) 0:05:31.755 ********** 2026-03-29 00:55:07.278806 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:55:07.278809 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:55:07.278813 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:55:07.278817 | orchestrator | 2026-03-29 00:55:07.278821 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-29 00:55:07.278825 | orchestrator | Sunday 29 March 2026 00:54:23 +0000 (0:00:00.682) 0:05:32.438 ********** 2026-03-29 00:55:07.278828 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:55:07.278832 | orchestrator | ok: [testbed-node-1] 
2026-03-29 00:55:07.278836 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:55:07.278840 | orchestrator | 2026-03-29 00:55:07.278844 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-29 00:55:07.278847 | orchestrator | Sunday 29 March 2026 00:54:24 +0000 (0:00:00.882) 0:05:33.320 ********** 2026-03-29 00:55:07.278851 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:55:07.278855 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:55:07.278859 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:55:07.278862 | orchestrator | 2026-03-29 00:55:07.278866 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-29 00:55:07.278870 | orchestrator | Sunday 29 March 2026 00:54:25 +0000 (0:00:00.969) 0:05:34.290 ********** 2026-03-29 00:55:07.278874 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:55:07.278877 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:55:07.278881 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:55:07.278885 | orchestrator | 2026-03-29 00:55:07.278889 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-29 00:55:07.278893 | orchestrator | Sunday 29 March 2026 00:54:26 +0000 (0:00:00.930) 0:05:35.220 ********** 2026-03-29 00:55:07.278896 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:55:07.278900 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:55:07.278904 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:55:07.278908 | orchestrator | 2026-03-29 00:55:07.278911 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-29 00:55:07.278915 | orchestrator | Sunday 29 March 2026 00:54:27 +0000 (0:00:00.999) 0:05:36.220 ********** 2026-03-29 00:55:07.278919 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:55:07.278923 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.278927 | orchestrator | changed: 
[testbed-node-0] 2026-03-29 00:55:07.278930 | orchestrator | 2026-03-29 00:55:07.278934 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-29 00:55:07.278938 | orchestrator | Sunday 29 March 2026 00:54:35 +0000 (0:00:08.353) 0:05:44.574 ********** 2026-03-29 00:55:07.278942 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:55:07.278946 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:55:07.278949 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:55:07.278953 | orchestrator | 2026-03-29 00:55:07.278957 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-29 00:55:07.278964 | orchestrator | Sunday 29 March 2026 00:54:36 +0000 (0:00:00.995) 0:05:45.570 ********** 2026-03-29 00:55:07.278968 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:55:07.278972 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:55:07.278975 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.278979 | orchestrator | 2026-03-29 00:55:07.278983 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-29 00:55:07.278987 | orchestrator | Sunday 29 March 2026 00:54:45 +0000 (0:00:08.864) 0:05:54.434 ********** 2026-03-29 00:55:07.278991 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:55:07.278994 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:55:07.278998 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:55:07.279002 | orchestrator | 2026-03-29 00:55:07.279006 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-29 00:55:07.279010 | orchestrator | Sunday 29 March 2026 00:54:50 +0000 (0:00:04.646) 0:05:59.081 ********** 2026-03-29 00:55:07.279013 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:55:07.279017 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:55:07.279021 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:55:07.279025 | 
orchestrator | 2026-03-29 00:55:07.279029 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-29 00:55:07.279032 | orchestrator | Sunday 29 March 2026 00:54:59 +0000 (0:00:09.544) 0:06:08.626 ********** 2026-03-29 00:55:07.279036 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.279040 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.279044 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.279047 | orchestrator | 2026-03-29 00:55:07.279051 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-29 00:55:07.279057 | orchestrator | Sunday 29 March 2026 00:55:00 +0000 (0:00:00.685) 0:06:09.311 ********** 2026-03-29 00:55:07.279061 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.279065 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.279069 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.279073 | orchestrator | 2026-03-29 00:55:07.279076 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-29 00:55:07.279080 | orchestrator | Sunday 29 March 2026 00:55:00 +0000 (0:00:00.312) 0:06:09.624 ********** 2026-03-29 00:55:07.279084 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.279088 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.279092 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.279095 | orchestrator | 2026-03-29 00:55:07.279099 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-29 00:55:07.279105 | orchestrator | Sunday 29 March 2026 00:55:01 +0000 (0:00:00.307) 0:06:09.931 ********** 2026-03-29 00:55:07.279109 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.279113 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.279117 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.279120 | 
orchestrator | 2026-03-29 00:55:07.279124 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-29 00:55:07.279128 | orchestrator | Sunday 29 March 2026 00:55:01 +0000 (0:00:00.299) 0:06:10.231 ********** 2026-03-29 00:55:07.279132 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.279135 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.279139 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.279143 | orchestrator | 2026-03-29 00:55:07.279147 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-29 00:55:07.279151 | orchestrator | Sunday 29 March 2026 00:55:02 +0000 (0:00:00.563) 0:06:10.795 ********** 2026-03-29 00:55:07.279154 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:55:07.279158 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:55:07.279162 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:55:07.279165 | orchestrator | 2026-03-29 00:55:07.279169 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-29 00:55:07.279173 | orchestrator | Sunday 29 March 2026 00:55:02 +0000 (0:00:00.339) 0:06:11.134 ********** 2026-03-29 00:55:07.279180 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:55:07.279184 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:55:07.279187 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:55:07.279191 | orchestrator | 2026-03-29 00:55:07.279195 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-29 00:55:07.279199 | orchestrator | Sunday 29 March 2026 00:55:03 +0000 (0:00:00.950) 0:06:12.085 ********** 2026-03-29 00:55:07.279203 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:55:07.279206 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:55:07.279210 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:55:07.279214 | orchestrator | 2026-03-29 
00:55:07.279218 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:55:07.279222 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-29 00:55:07.279226 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-29 00:55:07.279230 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-29 00:55:07.279233 | orchestrator | 2026-03-29 00:55:07.279237 | orchestrator | 2026-03-29 00:55:07.279241 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:55:07.279245 | orchestrator | Sunday 29 March 2026 00:55:04 +0000 (0:00:00.799) 0:06:12.884 ********** 2026-03-29 00:55:07.279249 | orchestrator | =============================================================================== 2026-03-29 00:55:07.279252 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.54s 2026-03-29 00:55:07.279256 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.86s 2026-03-29 00:55:07.279260 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.35s 2026-03-29 00:55:07.279264 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.37s 2026-03-29 00:55:07.279268 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.80s 2026-03-29 00:55:07.279271 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.26s 2026-03-29 00:55:07.279275 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.91s 2026-03-29 00:55:07.279279 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.81s 2026-03-29 00:55:07.279283 | 
orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.65s 2026-03-29 00:55:07.279286 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.37s 2026-03-29 00:55:07.279290 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.34s 2026-03-29 00:55:07.279294 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.25s 2026-03-29 00:55:07.279298 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.23s 2026-03-29 00:55:07.279301 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.19s 2026-03-29 00:55:07.279305 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.19s 2026-03-29 00:55:07.279309 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.08s 2026-03-29 00:55:07.279313 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 3.88s 2026-03-29 00:55:07.279316 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.87s 2026-03-29 00:55:07.279322 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 3.83s 2026-03-29 00:55:07.279326 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 3.77s 2026-03-29 00:55:07.279330 | orchestrator | 2026-03-29 00:55:07 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:55:07.279336 | orchestrator | 2026-03-29 00:55:07 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:10.311790 | orchestrator | 2026-03-29 00:55:10 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:10.311847 | orchestrator | 2026-03-29 00:55:10 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 
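The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" records that follow come from a simple poll-and-wait pattern. A minimal sketch of that pattern (the `get_state` callable is a hypothetical stand-in for the real task API, not OSISM's actual client code):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=300.0):
    """Poll each pending task every `interval` seconds until all report
    SUCCESS, raising TimeoutError if the deadline passes first."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            time.sleep(interval)
    return True
```

Each poll cycle re-checks only the still-pending IDs, which matches the log: once task 1a16439d… reaches SUCCESS it drops out of the status lines while the others keep being reported.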
2026-03-29 00:55:10.314159 | orchestrator | 2026-03-29 00:55:10 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:55:10.314213 | orchestrator | 2026-03-29 00:55:10 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:13.341112 | orchestrator | 2026-03-29 00:55:13 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:13.342310 | orchestrator | 2026-03-29 00:55:13 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:55:13.342856 | orchestrator | 2026-03-29 00:55:13 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:55:13.342900 | orchestrator | 2026-03-29 00:55:13 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:16.375937 | orchestrator | 2026-03-29 00:55:16 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:16.376951 | orchestrator | 2026-03-29 00:55:16 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:55:16.378386 | orchestrator | 2026-03-29 00:55:16 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:55:16.378424 | orchestrator | 2026-03-29 00:55:16 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:19.418587 | orchestrator | 2026-03-29 00:55:19 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:19.418804 | orchestrator | 2026-03-29 00:55:19 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:55:19.420457 | orchestrator | 2026-03-29 00:55:19 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:55:19.420489 | orchestrator | 2026-03-29 00:55:19 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:22.455702 | orchestrator | 2026-03-29 00:55:22 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:22.455983 | orchestrator | 2026-03-29 
00:55:22 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:55:22.457283 | orchestrator | 2026-03-29 00:55:22 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:55:22.457332 | orchestrator | 2026-03-29 00:55:22 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:25.481700 | orchestrator | 2026-03-29 00:55:25 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:25.481746 | orchestrator | 2026-03-29 00:55:25 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:55:25.482793 | orchestrator | 2026-03-29 00:55:25 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:55:25.482819 | orchestrator | 2026-03-29 00:55:25 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:28.510750 | orchestrator | 2026-03-29 00:55:28 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:28.519116 | orchestrator | 2026-03-29 00:55:28 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:55:28.521838 | orchestrator | 2026-03-29 00:55:28 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:55:28.521901 | orchestrator | 2026-03-29 00:55:28 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:31.567616 | orchestrator | 2026-03-29 00:55:31 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:31.569090 | orchestrator | 2026-03-29 00:55:31 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:55:31.571239 | orchestrator | 2026-03-29 00:55:31 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:55:31.571556 | orchestrator | 2026-03-29 00:55:31 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:34.601575 | orchestrator | 2026-03-29 00:55:34 | INFO  | Task 
1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:34.602361 | orchestrator | 2026-03-29 00:55:34 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:55:34.604182 | orchestrator | 2026-03-29 00:55:34 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:55:34.604209 | orchestrator | 2026-03-29 00:55:34 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:37.637336 | orchestrator | 2026-03-29 00:55:37 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:37.638572 | orchestrator | 2026-03-29 00:55:37 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:55:37.639802 | orchestrator | 2026-03-29 00:55:37 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:55:37.639842 | orchestrator | 2026-03-29 00:55:37 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:40.689555 | orchestrator | 2026-03-29 00:55:40 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:40.692059 | orchestrator | 2026-03-29 00:55:40 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:55:40.694773 | orchestrator | 2026-03-29 00:55:40 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:55:40.694833 | orchestrator | 2026-03-29 00:55:40 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:43.738640 | orchestrator | 2026-03-29 00:55:43 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:43.741766 | orchestrator | 2026-03-29 00:55:43 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:55:43.743408 | orchestrator | 2026-03-29 00:55:43 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:55:43.743709 | orchestrator | 2026-03-29 00:55:43 | INFO  | Wait 1 second(s) until the next 
check 2026-03-29 00:55:46.789000 | orchestrator | 2026-03-29 00:55:46 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:46.790065 | orchestrator | 2026-03-29 00:55:46 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:55:46.791278 | orchestrator | 2026-03-29 00:55:46 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:55:46.791357 | orchestrator | 2026-03-29 00:55:46 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:49.834950 | orchestrator | 2026-03-29 00:55:49 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:49.838867 | orchestrator | 2026-03-29 00:55:49 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:55:49.841090 | orchestrator | 2026-03-29 00:55:49 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:55:49.841202 | orchestrator | 2026-03-29 00:55:49 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:52.883976 | orchestrator | 2026-03-29 00:55:52 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:52.887718 | orchestrator | 2026-03-29 00:55:52 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:55:52.890061 | orchestrator | 2026-03-29 00:55:52 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:55:52.890097 | orchestrator | 2026-03-29 00:55:52 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:55.937978 | orchestrator | 2026-03-29 00:55:55 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:55.940524 | orchestrator | 2026-03-29 00:55:55 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:55:55.942945 | orchestrator | 2026-03-29 00:55:55 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 
00:55:55.943039 | orchestrator | 2026-03-29 00:55:55 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:58.994561 | orchestrator | 2026-03-29 00:55:58 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:55:58.998781 | orchestrator | 2026-03-29 00:55:58 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:55:59.001068 | orchestrator | 2026-03-29 00:55:59 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:55:59.001123 | orchestrator | 2026-03-29 00:55:59 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:02.063302 | orchestrator | 2026-03-29 00:56:02 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:56:02.066339 | orchestrator | 2026-03-29 00:56:02 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:56:02.068203 | orchestrator | 2026-03-29 00:56:02 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:56:02.068772 | orchestrator | 2026-03-29 00:56:02 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:05.116652 | orchestrator | 2026-03-29 00:56:05 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:56:05.117968 | orchestrator | 2026-03-29 00:56:05 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:56:05.118853 | orchestrator | 2026-03-29 00:56:05 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:56:05.118876 | orchestrator | 2026-03-29 00:56:05 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:08.166595 | orchestrator | 2026-03-29 00:56:08 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:56:08.167325 | orchestrator | 2026-03-29 00:56:08 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:56:08.168838 | orchestrator | 2026-03-29 00:56:08 | 
INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:56:08.168882 | orchestrator | 2026-03-29 00:56:08 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:11.223123 | orchestrator | 2026-03-29 00:56:11 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:56:11.224092 | orchestrator | 2026-03-29 00:56:11 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:56:11.225669 | orchestrator | 2026-03-29 00:56:11 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:56:11.225727 | orchestrator | 2026-03-29 00:56:11 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:14.254059 | orchestrator | 2026-03-29 00:56:14 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:56:14.255281 | orchestrator | 2026-03-29 00:56:14 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:56:14.256884 | orchestrator | 2026-03-29 00:56:14 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:56:14.256932 | orchestrator | 2026-03-29 00:56:14 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:17.310991 | orchestrator | 2026-03-29 00:56:17 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:56:17.312744 | orchestrator | 2026-03-29 00:56:17 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:56:17.315771 | orchestrator | 2026-03-29 00:56:17 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:56:17.316399 | orchestrator | 2026-03-29 00:56:17 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:20.366921 | orchestrator | 2026-03-29 00:56:20 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:56:20.367202 | orchestrator | 2026-03-29 00:56:20 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in 
state STARTED 2026-03-29 00:56:20.367963 | orchestrator | 2026-03-29 00:56:20 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:56:20.368011 | orchestrator | 2026-03-29 00:56:20 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:23.409957 | orchestrator | 2026-03-29 00:56:23 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:56:23.410356 | orchestrator | 2026-03-29 00:56:23 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:56:23.411907 | orchestrator | 2026-03-29 00:56:23 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:56:23.411941 | orchestrator | 2026-03-29 00:56:23 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:26.472237 | orchestrator | 2026-03-29 00:56:26 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:56:26.474582 | orchestrator | 2026-03-29 00:56:26 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:56:26.476478 | orchestrator | 2026-03-29 00:56:26 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:56:26.476581 | orchestrator | 2026-03-29 00:56:26 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:29.530157 | orchestrator | 2026-03-29 00:56:29 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:56:29.532144 | orchestrator | 2026-03-29 00:56:29 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:56:29.534631 | orchestrator | 2026-03-29 00:56:29 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:56:29.534673 | orchestrator | 2026-03-29 00:56:29 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:32.581661 | orchestrator | 2026-03-29 00:56:32 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:56:32.583852 | orchestrator 
| 2026-03-29 00:56:32 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:56:32.586252 | orchestrator | 2026-03-29 00:56:32 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:56:32.586723 | orchestrator | 2026-03-29 00:56:32 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:35.630656 | orchestrator | 2026-03-29 00:56:35 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:56:35.632851 | orchestrator | 2026-03-29 00:56:35 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:56:35.634833 | orchestrator | 2026-03-29 00:56:35 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:56:35.634932 | orchestrator | 2026-03-29 00:56:35 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:38.680901 | orchestrator | 2026-03-29 00:56:38 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:56:38.682189 | orchestrator | 2026-03-29 00:56:38 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:56:38.683566 | orchestrator | 2026-03-29 00:56:38 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:56:38.683658 | orchestrator | 2026-03-29 00:56:38 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:41.727548 | orchestrator | 2026-03-29 00:56:41 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED 2026-03-29 00:56:41.728865 | orchestrator | 2026-03-29 00:56:41 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:56:41.730797 | orchestrator | 2026-03-29 00:56:41 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:56:41.731087 | orchestrator | 2026-03-29 00:56:41 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:44.780697 | orchestrator | 2026-03-29 00:56:44 | INFO  | Task 
1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:56:44.785835 | orchestrator | 2026-03-29 00:56:44 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED
2026-03-29 00:56:44.785906 | orchestrator | 2026-03-29 00:56:44 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED
2026-03-29 00:56:44.785913 | orchestrator | 2026-03-29 00:56:44 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:56:47.834409 | orchestrator | 2026-03-29 00:56:47 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state STARTED
2026-03-29 00:56:47.836810 | orchestrator | 2026-03-29 00:56:47 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED
2026-03-29 00:56:47.838685 | orchestrator | 2026-03-29 00:56:47 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED
2026-03-29 00:56:47.838895 | orchestrator | 2026-03-29 00:56:47 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:56:50.894528 | orchestrator | 2026-03-29 00:56:50 | INFO  | Task 1f78857e-e9f0-4ef0-b3d3-37a20238e210 is in state STARTED
2026-03-29 00:56:50.900501 | orchestrator | 2026-03-29 00:56:50 | INFO  | Task 1a16439d-2e03-442a-b330-ac506537ee12 is in state SUCCESS
2026-03-29 00:56:50.901542 | orchestrator |
2026-03-29 00:56:50.901651 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-29 00:56:50.901663 | orchestrator | 2.16.14
2026-03-29 00:56:50.901671 | orchestrator |
2026-03-29 00:56:50.901682 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-03-29 00:56:50.901693 | orchestrator |
2026-03-29 00:56:50.901702 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-29 00:56:50.901712 | orchestrator | Sunday 29 March 2026 00:46:40 +0000 (0:00:00.654) 0:00:00.654 **********
2026-03-29 00:56:50.901724 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:56:50.901735 | orchestrator |
2026-03-29 00:56:50.901746 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-29 00:56:50.901801 | orchestrator | Sunday 29 March 2026 00:46:41 +0000 (0:00:01.062) 0:00:01.716 **********
2026-03-29 00:56:50.901810 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.901834 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.901932 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.901940 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.901946 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.901952 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.901958 | orchestrator |
2026-03-29 00:56:50.901964 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-29 00:56:50.902008 | orchestrator | Sunday 29 March 2026 00:46:43 +0000 (0:00:01.596) 0:00:03.313 **********
2026-03-29 00:56:50.902049 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.902056 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.902062 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.902068 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.902073 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.902079 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.902085 | orchestrator |
2026-03-29 00:56:50.902091 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-29 00:56:50.902108 | orchestrator | Sunday 29 March 2026 00:46:44 +0000 (0:00:00.644) 0:00:03.957 **********
2026-03-29 00:56:50.902114 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.902130 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.902137 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.902151 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.902158 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.902165 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.902171 | orchestrator |
2026-03-29 00:56:50.902178 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-29 00:56:50.902192 | orchestrator | Sunday 29 March 2026 00:46:45 +0000 (0:00:01.234) 0:00:05.192 **********
2026-03-29 00:56:50.902203 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.902225 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.902236 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.902245 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.902255 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.902312 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.902326 | orchestrator |
2026-03-29 00:56:50.902335 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-29 00:56:50.902344 | orchestrator | Sunday 29 March 2026 00:46:46 +0000 (0:00:01.029) 0:00:06.221 **********
2026-03-29 00:56:50.902353 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.902373 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.902383 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.902393 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.902403 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.902412 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.902421 | orchestrator |
2026-03-29 00:56:50.902431 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-29 00:56:50.902550 | orchestrator | Sunday 29 March 2026 00:46:47 +0000 (0:00:00.793) 0:00:07.015 **********
2026-03-29 00:56:50.902558 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.902564 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.902570 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.902576 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.902610 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.902616 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.902622 | orchestrator |
2026-03-29 00:56:50.902640 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-29 00:56:50.902647 | orchestrator | Sunday 29 March 2026 00:46:48 +0000 (0:00:01.586) 0:00:08.601 **********
2026-03-29 00:56:50.902652 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.902659 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.902665 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.902671 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.902677 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.902683 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.902689 | orchestrator |
2026-03-29 00:56:50.902695 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-29 00:56:50.902711 | orchestrator | Sunday 29 March 2026 00:46:49 +0000 (0:00:00.602) 0:00:09.204 **********
2026-03-29 00:56:50.902717 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.902723 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.902729 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.902734 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.902740 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.902746 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.902752 | orchestrator |
2026-03-29 00:56:50.902758 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-29 00:56:50.902763 | orchestrator | Sunday 29 March 2026 00:46:50 +0000 (0:00:01.150) 0:00:10.354 **********
2026-03-29 00:56:50.902769 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-29 00:56:50.902775 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-29 00:56:50.902782 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-29 00:56:50.902787 | orchestrator |
2026-03-29 00:56:50.902793 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-29 00:56:50.902799 | orchestrator | Sunday 29 March 2026 00:46:51 +0000 (0:00:01.146) 0:00:11.500 **********
2026-03-29 00:56:50.902805 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.902811 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.902816 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.902835 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.902841 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.902847 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.902863 | orchestrator |
2026-03-29 00:56:50.902869 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-29 00:56:50.902875 | orchestrator | Sunday 29 March 2026 00:46:53 +0000 (0:00:01.534) 0:00:13.034 **********
2026-03-29 00:56:50.902881 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-29 00:56:50.902887 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-29 00:56:50.902893 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-29 00:56:50.902899 | orchestrator |
2026-03-29 00:56:50.902905 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-29 00:56:50.902910 | orchestrator | Sunday 29 March 2026 00:46:55 +0000 (0:00:02.658) 0:00:15.693 **********
2026-03-29 00:56:50.902916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-29 00:56:50.902922 | orchestrator | skipping:
[testbed-node-3] => (item=testbed-node-1)
2026-03-29 00:56:50.902929 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-29 00:56:50.902934 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.902940 | orchestrator |
2026-03-29 00:56:50.902946 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-29 00:56:50.902952 | orchestrator | Sunday 29 March 2026 00:46:56 +0000 (0:00:00.734) 0:00:16.427 **********
2026-03-29 00:56:50.902960 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-29 00:56:50.903003 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-29 00:56:50.903015 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-29 00:56:50.903025 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.903034 | orchestrator |
2026-03-29 00:56:50.903210 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-29 00:56:50.903226 | orchestrator | Sunday 29 March 2026 00:46:57 +0000 (0:00:00.929) 0:00:17.357 **********
2026-03-29 00:56:50.903239 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-29 00:56:50.903252 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-29 00:56:50.903261 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-29 00:56:50.903296 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.903307 | orchestrator |
2026-03-29 00:56:50.903317 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-29 00:56:50.903327 | orchestrator | Sunday 29 March 2026 00:46:57 +0000 (0:00:00.185) 0:00:17.543 **********
2026-03-29 00:56:50.903354 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-29 00:46:53.912126', 'end': '2026-03-29 00:46:53.999740', 'delta': '0:00:00.087614', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-29 00:56:50.903370 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-29 00:46:54.591016', 'end': '2026-03-29 00:46:54.665197', 'delta': '0:00:00.074181', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-29 00:56:50.903388 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-29 00:46:55.510538', 'end': '2026-03-29 00:46:55.578334', 'delta': '0:00:00.067796', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-29 00:56:50.903407 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.903417 | orchestrator |
2026-03-29 00:56:50.903426 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-29 00:56:50.903436 | orchestrator | Sunday 29 March 2026 00:46:58 +0000 (0:00:00.381) 0:00:17.924 **********
2026-03-29 00:56:50.903446 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.903457 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.903463 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.903526 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.903536 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.903545 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.903642 | orchestrator |
2026-03-29 00:56:50.903684 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-29 00:56:50.903696 | orchestrator | Sunday 29 March 2026 00:47:01 +0000 (0:00:03.274) 0:00:21.199 **********
2026-03-29 00:56:50.903706 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-29 00:56:50.903715 | orchestrator |
2026-03-29 00:56:50.903725 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-29 00:56:50.903733 | orchestrator | Sunday 29 March 2026 00:47:02 +0000 (0:00:01.071) 0:00:22.270 **********
2026-03-29 00:56:50.903742 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.903750 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.903760 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.903769 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.903778 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.903787 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.903796 | orchestrator |
2026-03-29 00:56:50.903805 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-29 00:56:50.903814 | orchestrator | Sunday 29 March 2026 00:47:03 +0000 (0:00:01.098) 0:00:23.368 **********
2026-03-29 00:56:50.903824 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.903834 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.903844 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.903854 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.903950 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.903962 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.903973 | orchestrator |
2026-03-29 00:56:50.903983 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-29 00:56:50.903994 | orchestrator | Sunday 29 March 2026 00:47:04 +0000 (0:00:01.304) 0:00:24.673 **********
2026-03-29 00:56:50.904004 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.904015 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.904026 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.904036 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.904046 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.904056 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.904065 | orchestrator |
2026-03-29 00:56:50.904074 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-29 00:56:50.904084 | orchestrator | Sunday 29 March 2026 00:47:05 +0000 (0:00:00.768) 0:00:25.442 **********
2026-03-29 00:56:50.904093 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.904103 | orchestrator |
2026-03-29 00:56:50.904113 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-29 00:56:50.904124 | orchestrator | Sunday 29 March 2026 00:47:05 +0000 (0:00:00.431) 0:00:25.874 **********
2026-03-29 00:56:50.904149 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.904161 | orchestrator |
2026-03-29 00:56:50.904170 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-29 00:56:50.904180 | orchestrator | Sunday 29 March 2026 00:47:06 +0000 (0:00:00.455)
0:00:26.329 **********
2026-03-29 00:56:50.904247 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.904259 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.904284 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.904330 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.904343 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.904353 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.904363 | orchestrator |
2026-03-29 00:56:50.904399 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-29 00:56:50.904408 | orchestrator | Sunday 29 March 2026 00:47:06 +0000 (0:00:00.553) 0:00:26.883 **********
2026-03-29 00:56:50.904417 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.904423 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.904429 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.904435 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.904440 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.904446 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.904478 | orchestrator |
2026-03-29 00:56:50.904484 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-29 00:56:50.904491 | orchestrator | Sunday 29 March 2026 00:47:07 +0000 (0:00:00.769) 0:00:27.653 **********
2026-03-29 00:56:50.904497 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.904528 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.904535 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.904543 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.904549 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.904555 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.904561 | orchestrator |
2026-03-29 00:56:50.904567 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-29 00:56:50.904573 | orchestrator | Sunday 29 March 2026 00:47:08 +0000 (0:00:00.602) 0:00:28.255 **********
2026-03-29 00:56:50.904720 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.904754 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.904764 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.904773 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.904782 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.904790 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.904799 | orchestrator |
2026-03-29 00:56:50.904808 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-29 00:56:50.904934 | orchestrator | Sunday 29 March 2026 00:47:09 +0000 (0:00:00.662) 0:00:28.918 **********
2026-03-29 00:56:50.904971 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.904981 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.905000 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.905009 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.905018 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.905027 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.905038 | orchestrator |
2026-03-29 00:56:50.905047 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-29 00:56:50.905057 | orchestrator | Sunday 29 March 2026 00:47:09 +0000 (0:00:00.539) 0:00:29.457 **********
2026-03-29 00:56:50.905066 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.905075 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.905086 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.905149 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.905173 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.905186 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.905193 | orchestrator |
2026-03-29 00:56:50.905199 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-29 00:56:50.905206 | orchestrator | Sunday 29 March 2026 00:47:10 +0000 (0:00:00.736) 0:00:30.193 **********
2026-03-29 00:56:50.905212 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.905219 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.905225 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.905230 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.905237 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.905252 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.905259 | orchestrator |
2026-03-29 00:56:50.905266 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-29 00:56:50.905272 | orchestrator | Sunday 29 March 2026 00:47:11 +0000 (0:00:01.222) 0:00:31.416 **********
2026-03-29 00:56:50.905280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cb4f0063--6caa--55a9--9ed6--73f648958ae5-osd--block--cb4f0063--6caa--55a9--9ed6--73f648958ae5', 'dm-uuid-LVM-18FbCbvoBegBDNziKS3a5CeZ2dFoK2wu0N0E07gwNCXzlSASmyYj5WPMEdm7tBUd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-29 00:56:50.905290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9db53e8f--4e16--545c--9934--db4b909c3b32-osd--block--9db53e8f--4e16--545c--9934--db4b909c3b32', 'dm-uuid-LVM-dFKc45nUf5iLu79iHhJ43d7H348x9NFjq3sa4hhA7pTFRvreSAL7kYcXRjShn3hY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-29 00:56:50.905309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:56:50.905319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:56:50.905325 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:56:50.905337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:56:50.905343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:56:50.905350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:56:50.905363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:56:50.905369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard':
'0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:56:50.905386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253', 'scsi-SQEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-29 00:56:50.905404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--cb4f0063--6caa--55a9--9ed6--73f648958ae5-osd--block--cb4f0063--6caa--55a9--9ed6--73f648958ae5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-09eM06-PD0h-wxVC-7dOo-u1c0-fl3j-382s19', 'scsi-0QEMU_QEMU_HARDDISK_cf707a58-c66d-4c72-840a-e00f4b50b6ac', 'scsi-SQEMU_QEMU_HARDDISK_cf707a58-c66d-4c72-840a-e00f4b50b6ac'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-29 00:56:50.905413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9db53e8f--4e16--545c--9934--db4b909c3b32-osd--block--9db53e8f--4e16--545c--9934--db4b909c3b32'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cSlgWj-hXCs-N7CV-oNQq-3ad2-8oJB-B66ILb', 'scsi-0QEMU_QEMU_HARDDISK_756b3521-cc64-4337-8d74-551033403337', 'scsi-SQEMU_QEMU_HARDDISK_756b3521-cc64-4337-8d74-551033403337'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-29 00:56:50.905423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_006c3921-cee3-45d1-95d5-34c501bc63f9', 'scsi-SQEMU_QEMU_HARDDISK_006c3921-cee3-45d1-95d5-34c501bc63f9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-29 00:56:50.905431 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-29 00:56:50.905438 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.905449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ce40293b--1bc0--5558--a1b7--16c9a624d7c9-osd--block--ce40293b--1bc0--5558--a1b7--16c9a624d7c9', 'dm-uuid-LVM-6hbEsefhTAiYT2twgIRfBFKeXHhANdtLmZy7Xesck6f4vVy3CfM6Jyla6mlP71ci'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-29 00:56:50.905456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c9903f66--e17d--5d19--b140--42471f0a3aa8-osd--block--c9903f66--e17d--5d19--b140--42471f0a3aa8', 'dm-uuid-LVM-whOfwp51vxB6KSTsdyjJLvfitjjSuaFs23iFECqnqj2NhA3btC5jY7YWidpOeMfo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-29 00:56:50.905466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:56:50.905472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--185c2dd0--6b1c--571f--b734--244d928106eb-osd--block--185c2dd0--6b1c--571f--b734--244d928106eb', 'dm-uuid-LVM-yt3wn1MfD3Yrl20FyTmocI3ouGdQngsND3KunRKngYF0iMv3GtbeAEjxSjIK3cWd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-29 00:56:50.905483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--18721a71--2d87--5ab0--bec8--5e03a015e695-osd--block--18721a71--2d87--5ab0--bec8--5e03a015e695', 'dm-uuid-LVM-JKvUQZO2kAxAc4jJG9NJg9LeFxFWhhcLFpEl4OB8kAUPSVlpLZb6vpxxiz3mwLsG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905502 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905520 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987', 'scsi-SQEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:56:50.905677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--185c2dd0--6b1c--571f--b734--244d928106eb-osd--block--185c2dd0--6b1c--571f--b734--244d928106eb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8NXN2d-8oWY-zl9N-hWCw-e0nf-McpG-E2aXC2', 'scsi-0QEMU_QEMU_HARDDISK_19b179cd-386f-4584-8a4b-106e5ad8592d', 'scsi-SQEMU_QEMU_HARDDISK_19b179cd-386f-4584-8a4b-106e5ad8592d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:56:50.905697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--18721a71--2d87--5ab0--bec8--5e03a015e695-osd--block--18721a71--2d87--5ab0--bec8--5e03a015e695'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AolG9Q-XuJi-H8Ed-7JBm-KDZW-dSrk-GYSOpU', 'scsi-0QEMU_QEMU_HARDDISK_64dd44e8-56db-4990-9653-26f9a904c769', 'scsi-SQEMU_QEMU_HARDDISK_64dd44e8-56db-4990-9653-26f9a904c769'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:56:50.905735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905772 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66d732d5-e9a7-47c2-8d7a-ba89d690a00e', 'scsi-SQEMU_QEMU_HARDDISK_66d732d5-e9a7-47c2-8d7a-ba89d690a00e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:56:50.905816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.905827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:56:50.905849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad0abeaf-0bd4-438b-a52a-fa71680f00ed', 'scsi-SQEMU_QEMU_HARDDISK_ad0abeaf-0bd4-438b-a52a-fa71680f00ed'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad0abeaf-0bd4-438b-a52a-fa71680f00ed-part1', 'scsi-SQEMU_QEMU_HARDDISK_ad0abeaf-0bd4-438b-a52a-fa71680f00ed-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad0abeaf-0bd4-438b-a52a-fa71680f00ed-part14', 'scsi-SQEMU_QEMU_HARDDISK_ad0abeaf-0bd4-438b-a52a-fa71680f00ed-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad0abeaf-0bd4-438b-a52a-fa71680f00ed-part15', 'scsi-SQEMU_QEMU_HARDDISK_ad0abeaf-0bd4-438b-a52a-fa71680f00ed-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad0abeaf-0bd4-438b-a52a-fa71680f00ed-part16', 'scsi-SQEMU_QEMU_HARDDISK_ad0abeaf-0bd4-438b-a52a-fa71680f00ed-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:56:50.905863 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:56:50.905911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a', 'scsi-SQEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a-part1', 'scsi-SQEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a-part14', 'scsi-SQEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a-part15', 'scsi-SQEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 
'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a-part16', 'scsi-SQEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:56:50.905950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.906005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.906086 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ce40293b--1bc0--5558--a1b7--16c9a624d7c9-osd--block--ce40293b--1bc0--5558--a1b7--16c9a624d7c9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1L2DIb-j926-6me5-KfCU-0DmO-6Hcl-HH1eUV', 'scsi-0QEMU_QEMU_HARDDISK_f8431fa8-afc6-4068-bff4-a67d5c0799f9', 'scsi-SQEMU_QEMU_HARDDISK_f8431fa8-afc6-4068-bff4-a67d5c0799f9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:56:50.906108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.906128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.906166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.906188 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.906222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.906257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c9903f66--e17d--5d19--b140--42471f0a3aa8-osd--block--c9903f66--e17d--5d19--b140--42471f0a3aa8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9RD3yg-iigT-Wyq9-U1Cd-YSqQ-ePCr-skDTbW', 'scsi-0QEMU_QEMU_HARDDISK_08797191-4f26-4e13-8d53-ed6640c6fbd2', 'scsi-SQEMU_QEMU_HARDDISK_08797191-4f26-4e13-8d53-ed6640c6fbd2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:56:50.906275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.906314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42d62fbb-fef6-4bbb-9e64-d98a202adbe7', 'scsi-SQEMU_QEMU_HARDDISK_42d62fbb-fef6-4bbb-9e64-d98a202adbe7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42d62fbb-fef6-4bbb-9e64-d98a202adbe7-part1', 'scsi-SQEMU_QEMU_HARDDISK_42d62fbb-fef6-4bbb-9e64-d98a202adbe7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42d62fbb-fef6-4bbb-9e64-d98a202adbe7-part14', 'scsi-SQEMU_QEMU_HARDDISK_42d62fbb-fef6-4bbb-9e64-d98a202adbe7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42d62fbb-fef6-4bbb-9e64-d98a202adbe7-part15', 'scsi-SQEMU_QEMU_HARDDISK_42d62fbb-fef6-4bbb-9e64-d98a202adbe7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42d62fbb-fef6-4bbb-9e64-d98a202adbe7-part16', 'scsi-SQEMU_QEMU_HARDDISK_42d62fbb-fef6-4bbb-9e64-d98a202adbe7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:56:50.906348 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:56:50.906380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6eff12ff-972f-42e1-84ee-23c8e4926f48', 'scsi-SQEMU_QEMU_HARDDISK_6eff12ff-972f-42e1-84ee-23c8e4926f48'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:56:50.906401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:56:50.906421 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.906439 | orchestrator | skipping: [testbed-node-0] 2026-03-29 
00:56:50.906457 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.906477 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.906498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.906519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.906538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.906633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.906697 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.906742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.906772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.906808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:56:50.906841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e', 'scsi-SQEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e-part1', 'scsi-SQEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e-part14', 'scsi-SQEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e-part15', 'scsi-SQEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e-part16', 'scsi-SQEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:56:50.906892 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-04-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-29 00:56:50.906912 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.906922 | orchestrator |
2026-03-29 00:56:50.906931 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-29 00:56:50.906942 | orchestrator | Sunday 29 March 2026 00:47:13 +0000 (0:00:01.693) 0:00:33.110 **********
2026-03-29 00:56:50.906952 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cb4f0063--6caa--55a9--9ed6--73f648958ae5-osd--block--cb4f0063--6caa--55a9--9ed6--73f648958ae5', 'dm-uuid-LVM-18FbCbvoBegBDNziKS3a5CeZ2dFoK2wu0N0E07gwNCXzlSASmyYj5WPMEdm7tBUd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 00:56:50.908410 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool',
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9db53e8f--4e16--545c--9934--db4b909c3b32-osd--block--9db53e8f--4e16--545c--9934--db4b909c3b32', 'dm-uuid-LVM-dFKc45nUf5iLu79iHhJ43d7H348x9NFjq3sa4hhA7pTFRvreSAL7kYcXRjShn3hY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908449 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908456 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908462 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908477 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ce40293b--1bc0--5558--a1b7--16c9a624d7c9-osd--block--ce40293b--1bc0--5558--a1b7--16c9a624d7c9', 'dm-uuid-LVM-6hbEsefhTAiYT2twgIRfBFKeXHhANdtLmZy7Xesck6f4vVy3CfM6Jyla6mlP71ci'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908483 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-03-29 00:56:50.908543 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c9903f66--e17d--5d19--b140--42471f0a3aa8-osd--block--c9903f66--e17d--5d19--b140--42471f0a3aa8', 'dm-uuid-LVM-whOfwp51vxB6KSTsdyjJLvfitjjSuaFs23iFECqnqj2NhA3btC5jY7YWidpOeMfo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908552 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908558 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908565 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908576 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--185c2dd0--6b1c--571f--b734--244d928106eb-osd--block--185c2dd0--6b1c--571f--b734--244d928106eb', 'dm-uuid-LVM-yt3wn1MfD3Yrl20FyTmocI3ouGdQngsND3KunRKngYF0iMv3GtbeAEjxSjIK3cWd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908637 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--18721a71--2d87--5ab0--bec8--5e03a015e695-osd--block--18721a71--2d87--5ab0--bec8--5e03a015e695', 'dm-uuid-LVM-JKvUQZO2kAxAc4jJG9NJg9LeFxFWhhcLFpEl4OB8kAUPSVlpLZb6vpxxiz3mwLsG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908696 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908705 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908711 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908717 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908728 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908735 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908770 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908820 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253', 'scsi-SQEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-29 00:56:50.908842 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908852 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908898 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908906 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.908913 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--cb4f0063--6caa--55a9--9ed6--73f648958ae5-osd--block--cb4f0063--6caa--55a9--9ed6--73f648958ae5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', …}}, 'ansible_loop_var': 'item'})  [… verbose per-item skip output condensed: between 2026-03-29 00:56:50.908924 and 00:56:50.909787 the orchestrator reported "skipping" for every block device in the Ansible facts of all six nodes. On testbed-node-0, testbed-node-1 and testbed-node-2 every item (loop0–loop7; the 80.00 GB root disk sda with partitions sda1 'cloudimg-rootfs', sda14, sda15 'UEFI' and sda16 'BOOT'; and the 506.00 KB 'config-2' DVD sr0) was skipped with false_condition 'inventory_hostname in groups.get(osd_group_name, [])'. On testbed-node-3, testbed-node-4 and testbed-node-5 every item (loop0–loop7; the 80.00 GB root disk sda; the 20.00 GB QEMU HARDDISK devices sdb and sdc already holding ceph-…-osd--block LVM volumes; the unused 20.00 GB sdd; and sr0) was skipped with false_condition 'osd_auto_discovery | default(False) | bool'. Host-level skip summaries followed for testbed-node-3, testbed-node-0, testbed-node-4, testbed-node-1 and testbed-node-5 …] 2026-03-29 00:56:50.909787 | orchestrator | skipping: 
[testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e', 'scsi-SQEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e-part1', 'scsi-SQEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e-part14', 'scsi-SQEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e-part15', 'scsi-SQEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e-part16', 'scsi-SQEMU_QEMU_HARDDISK_8ca200e1-6960-4195-9342-d9b84c11b36e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.909802 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-04-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:56:50.909808 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.909813 | orchestrator | 2026-03-29 00:56:50.909828 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-29 00:56:50.909834 | orchestrator | Sunday 29 March 2026 00:47:15 +0000 (0:00:02.311) 0:00:35.422 ********** 2026-03-29 00:56:50.909840 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.909846 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.909851 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.909856 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.909862 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.909867 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.909872 | orchestrator | 2026-03-29 00:56:50.909878 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule 
fact] *************** 2026-03-29 00:56:50.909883 | orchestrator | Sunday 29 March 2026 00:47:17 +0000 (0:00:01.796) 0:00:37.218 ********** 2026-03-29 00:56:50.909889 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.909894 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.909899 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.909905 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.909910 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.909915 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.909920 | orchestrator | 2026-03-29 00:56:50.909926 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-29 00:56:50.909931 | orchestrator | Sunday 29 March 2026 00:47:18 +0000 (0:00:01.229) 0:00:38.447 ********** 2026-03-29 00:56:50.909937 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.909942 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.909947 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.909957 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.909963 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.909968 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.909973 | orchestrator | 2026-03-29 00:56:50.909979 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-29 00:56:50.909984 | orchestrator | Sunday 29 March 2026 00:47:20 +0000 (0:00:01.679) 0:00:40.127 ********** 2026-03-29 00:56:50.909990 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.909995 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.910001 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.910006 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.910053 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.910061 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.910066 | orchestrator | 
2026-03-29 00:56:50.910072 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-29 00:56:50.910129 | orchestrator | Sunday 29 March 2026 00:47:21 +0000 (0:00:00.922) 0:00:41.050 ********** 2026-03-29 00:56:50.910138 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.910144 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.910149 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.910154 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.910160 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.910165 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.910170 | orchestrator | 2026-03-29 00:56:50.910176 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-29 00:56:50.910181 | orchestrator | Sunday 29 March 2026 00:47:22 +0000 (0:00:01.476) 0:00:42.527 ********** 2026-03-29 00:56:50.910187 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.910192 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.910198 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.910203 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.910208 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.910214 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.910219 | orchestrator | 2026-03-29 00:56:50.910224 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-29 00:56:50.910230 | orchestrator | Sunday 29 March 2026 00:47:23 +0000 (0:00:00.939) 0:00:43.467 ********** 2026-03-29 00:56:50.910235 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-29 00:56:50.910241 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-29 00:56:50.910246 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-29 00:56:50.910252 | orchestrator | ok: 
[testbed-node-3] => (item=testbed-node-1) 2026-03-29 00:56:50.910257 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-29 00:56:50.910263 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 00:56:50.910268 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-29 00:56:50.910274 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-29 00:56:50.910279 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-29 00:56:50.910284 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-29 00:56:50.910290 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-29 00:56:50.910295 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-29 00:56:50.910301 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-29 00:56:50.910306 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-29 00:56:50.910311 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-29 00:56:50.910317 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-29 00:56:50.910322 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-29 00:56:50.910328 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-29 00:56:50.910333 | orchestrator | 2026-03-29 00:56:50.910339 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-29 00:56:50.910350 | orchestrator | Sunday 29 March 2026 00:47:27 +0000 (0:00:03.838) 0:00:47.305 ********** 2026-03-29 00:56:50.910355 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-29 00:56:50.910361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-29 00:56:50.910366 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-29 00:56:50.910371 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.910378 | orchestrator | 
skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-29 00:56:50.910387 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-29 00:56:50.910395 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-29 00:56:50.910403 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.910413 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-29 00:56:50.910420 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-29 00:56:50.910434 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-29 00:56:50.910446 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.910453 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-29 00:56:50.910463 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-29 00:56:50.910471 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-29 00:56:50.910479 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.910487 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-29 00:56:50.910495 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-29 00:56:50.910503 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-29 00:56:50.910511 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.910519 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-29 00:56:50.910527 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-29 00:56:50.910536 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-29 00:56:50.910545 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.910552 | orchestrator | 2026-03-29 00:56:50.910561 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-29 00:56:50.910570 | orchestrator | Sunday 29 March 2026 
00:47:28 +0000 (0:00:00.912) 0:00:48.217 ********** 2026-03-29 00:56:50.910628 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.910641 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.910649 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.910658 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:56:50.910667 | orchestrator | 2026-03-29 00:56:50.910682 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-29 00:56:50.910691 | orchestrator | Sunday 29 March 2026 00:47:29 +0000 (0:00:01.444) 0:00:49.661 ********** 2026-03-29 00:56:50.910699 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.910709 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.910766 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.910779 | orchestrator | 2026-03-29 00:56:50.910789 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-29 00:56:50.910799 | orchestrator | Sunday 29 March 2026 00:47:30 +0000 (0:00:00.648) 0:00:50.310 ********** 2026-03-29 00:56:50.910809 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.910818 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.910827 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.910834 | orchestrator | 2026-03-29 00:56:50.910840 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-29 00:56:50.910846 | orchestrator | Sunday 29 March 2026 00:47:30 +0000 (0:00:00.372) 0:00:50.682 ********** 2026-03-29 00:56:50.910853 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.910867 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.910873 | orchestrator | skipping: [testbed-node-5] 2026-03-29 
00:56:50.910879 | orchestrator | 2026-03-29 00:56:50.910885 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-29 00:56:50.910892 | orchestrator | Sunday 29 March 2026 00:47:31 +0000 (0:00:00.301) 0:00:50.984 ********** 2026-03-29 00:56:50.910898 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.910905 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.910912 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.910918 | orchestrator | 2026-03-29 00:56:50.910924 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-29 00:56:50.910931 | orchestrator | Sunday 29 March 2026 00:47:32 +0000 (0:00:00.979) 0:00:51.963 ********** 2026-03-29 00:56:50.910937 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:56:50.910944 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:56:50.910950 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:56:50.910956 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.910962 | orchestrator | 2026-03-29 00:56:50.910968 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-29 00:56:50.910974 | orchestrator | Sunday 29 March 2026 00:47:32 +0000 (0:00:00.318) 0:00:52.282 ********** 2026-03-29 00:56:50.910982 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:56:50.910991 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:56:50.911000 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:56:50.911008 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.911016 | orchestrator | 2026-03-29 00:56:50.911024 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-29 00:56:50.911032 | orchestrator | Sunday 29 March 2026 
00:47:32 +0000 (0:00:00.510) 0:00:52.793 ********** 2026-03-29 00:56:50.911039 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:56:50.911047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:56:50.911056 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:56:50.911065 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.911072 | orchestrator | 2026-03-29 00:56:50.911080 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-29 00:56:50.911088 | orchestrator | Sunday 29 March 2026 00:47:33 +0000 (0:00:00.443) 0:00:53.236 ********** 2026-03-29 00:56:50.911097 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.911105 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.911113 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.911122 | orchestrator | 2026-03-29 00:56:50.911131 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-29 00:56:50.911139 | orchestrator | Sunday 29 March 2026 00:47:33 +0000 (0:00:00.437) 0:00:53.673 ********** 2026-03-29 00:56:50.911147 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-29 00:56:50.911155 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-29 00:56:50.911164 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-29 00:56:50.911173 | orchestrator | 2026-03-29 00:56:50.911181 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-29 00:56:50.911189 | orchestrator | Sunday 29 March 2026 00:47:34 +0000 (0:00:00.866) 0:00:54.539 ********** 2026-03-29 00:56:50.911196 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-29 00:56:50.911206 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 00:56:50.911211 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 00:56:50.911216 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-29 00:56:50.911221 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-29 00:56:50.911232 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-29 00:56:50.911237 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-29 00:56:50.911242 | orchestrator | 2026-03-29 00:56:50.911247 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-29 00:56:50.911252 | orchestrator | Sunday 29 March 2026 00:47:35 +0000 (0:00:01.158) 0:00:55.698 ********** 2026-03-29 00:56:50.911256 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-29 00:56:50.911261 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 00:56:50.911266 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 00:56:50.911270 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-29 00:56:50.911281 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-29 00:56:50.911286 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-29 00:56:50.911291 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-29 00:56:50.911296 | orchestrator | 2026-03-29 00:56:50.911332 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-29 00:56:50.911337 | orchestrator | Sunday 29 March 2026 00:47:37 +0000 (0:00:01.665) 0:00:57.364 ********** 2026-03-29 00:56:50.911343 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:50.911349 | orchestrator | 2026-03-29 00:56:50.911354 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-29 00:56:50.911359 | orchestrator | Sunday 29 March 2026 00:47:38 +0000 (0:00:01.026) 0:00:58.391 ********** 2026-03-29 00:56:50.911364 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:50.911369 | orchestrator | 2026-03-29 00:56:50.911374 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-29 00:56:50.911379 | orchestrator | Sunday 29 March 2026 00:47:39 +0000 (0:00:01.144) 0:00:59.536 ********** 2026-03-29 00:56:50.911384 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.911389 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.911396 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.911404 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.911413 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.911426 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.911433 | orchestrator | 2026-03-29 00:56:50.911440 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-29 00:56:50.911448 | orchestrator | Sunday 29 March 2026 00:47:41 +0000 (0:00:01.387) 0:01:00.924 ********** 2026-03-29 00:56:50.911455 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.911463 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.911470 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.911476 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.911483 | orchestrator | ok: [testbed-node-4] 2026-03-29 
00:56:50.911491 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.911499 | orchestrator | 2026-03-29 00:56:50.911507 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-29 00:56:50.911514 | orchestrator | Sunday 29 March 2026 00:47:41 +0000 (0:00:00.853) 0:01:01.777 ********** 2026-03-29 00:56:50.911523 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.911531 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.911539 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.911547 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.911554 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.911561 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.911577 | orchestrator | 2026-03-29 00:56:50.911604 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-29 00:56:50.911609 | orchestrator | Sunday 29 March 2026 00:47:42 +0000 (0:00:00.751) 0:01:02.529 ********** 2026-03-29 00:56:50.911614 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.911619 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.911624 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.911632 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.911639 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.911646 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.911654 | orchestrator | 2026-03-29 00:56:50.911661 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-29 00:56:50.911667 | orchestrator | Sunday 29 March 2026 00:47:43 +0000 (0:00:01.234) 0:01:03.763 ********** 2026-03-29 00:56:50.911674 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.911681 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.911687 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.911695 | orchestrator | ok: 
[testbed-node-0] 2026-03-29 00:56:50.911702 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.911710 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.911718 | orchestrator | 2026-03-29 00:56:50.911726 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-29 00:56:50.911733 | orchestrator | Sunday 29 March 2026 00:47:45 +0000 (0:00:01.181) 0:01:04.945 ********** 2026-03-29 00:56:50.911740 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.911749 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.911757 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.911764 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.911772 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.911779 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.911787 | orchestrator | 2026-03-29 00:56:50.911794 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-29 00:56:50.911800 | orchestrator | Sunday 29 March 2026 00:47:45 +0000 (0:00:00.759) 0:01:05.705 ********** 2026-03-29 00:56:50.911807 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.911814 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.911821 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.911828 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.911836 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.911843 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.911850 | orchestrator | 2026-03-29 00:56:50.911857 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-29 00:56:50.911864 | orchestrator | Sunday 29 March 2026 00:47:46 +0000 (0:00:00.808) 0:01:06.514 ********** 2026-03-29 00:56:50.911873 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.911881 | orchestrator | ok: [testbed-node-4] 2026-03-29 
00:56:50.911887 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.911894 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.911900 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.911907 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.911914 | orchestrator | 2026-03-29 00:56:50.911921 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-29 00:56:50.911928 | orchestrator | Sunday 29 March 2026 00:47:49 +0000 (0:00:02.622) 0:01:09.136 ********** 2026-03-29 00:56:50.911936 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.911952 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.911959 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.911966 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.911974 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.911981 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.911988 | orchestrator | 2026-03-29 00:56:50.912052 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-29 00:56:50.912064 | orchestrator | Sunday 29 March 2026 00:47:50 +0000 (0:00:01.419) 0:01:10.555 ********** 2026-03-29 00:56:50.912072 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.912090 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.912098 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.912105 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.912113 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.912120 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.912128 | orchestrator | 2026-03-29 00:56:50.912135 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-29 00:56:50.912142 | orchestrator | Sunday 29 March 2026 00:47:51 +0000 (0:00:00.874) 0:01:11.430 ********** 2026-03-29 00:56:50.912150 | orchestrator | skipping: [testbed-node-3] 
2026-03-29 00:56:50.912157 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.912168 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.912180 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.912187 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.912194 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.912201 | orchestrator |
2026-03-29 00:56:50.912208 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-29 00:56:50.912215 | orchestrator | Sunday 29 March 2026 00:47:52 +0000 (0:00:00.986) 0:01:12.416 **********
2026-03-29 00:56:50.912222 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.912229 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.912237 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.912244 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.912251 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.912258 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.912265 | orchestrator |
2026-03-29 00:56:50.912272 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-29 00:56:50.912280 | orchestrator | Sunday 29 March 2026 00:47:54 +0000 (0:00:01.996) 0:01:14.413 **********
2026-03-29 00:56:50.912288 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.912295 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.912305 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.912316 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.912324 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.912331 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.912339 | orchestrator |
2026-03-29 00:56:50.912347 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-29 00:56:50.912355 | orchestrator | Sunday 29 March 2026 00:47:55 +0000 (0:00:00.797) 0:01:15.210 **********
2026-03-29 00:56:50.912361 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.912366 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.912371 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.912376 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.912380 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.912385 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.912390 | orchestrator |
2026-03-29 00:56:50.912395 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-29 00:56:50.912399 | orchestrator | Sunday 29 March 2026 00:47:56 +0000 (0:00:00.855) 0:01:16.066 **********
2026-03-29 00:56:50.912404 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.912409 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.912414 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.912419 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.912423 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.912428 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.912433 | orchestrator |
2026-03-29 00:56:50.912438 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-29 00:56:50.912442 | orchestrator | Sunday 29 March 2026 00:47:56 +0000 (0:00:00.629) 0:01:16.695 **********
2026-03-29 00:56:50.912447 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.912452 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.912457 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.912462 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.912470 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.912490 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.912499 | orchestrator |
2026-03-29 00:56:50.912508 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-29 00:56:50.912516 | orchestrator | Sunday 29 March 2026 00:47:57 +0000 (0:00:00.723) 0:01:17.418 **********
2026-03-29 00:56:50.912525 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.912530 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.912535 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.912540 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.912545 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.912550 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.912554 | orchestrator |
2026-03-29 00:56:50.912559 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-29 00:56:50.912564 | orchestrator | Sunday 29 March 2026 00:47:58 +0000 (0:00:00.683) 0:01:18.102 **********
2026-03-29 00:56:50.912569 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.912574 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.912605 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.912611 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.912616 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.912621 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.912625 | orchestrator |
2026-03-29 00:56:50.912630 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-29 00:56:50.912635 | orchestrator | Sunday 29 March 2026 00:47:59 +0000 (0:00:00.860) 0:01:18.963 **********
2026-03-29 00:56:50.912640 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.912645 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.912650 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.912656 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.912661 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.912667 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.912672 | orchestrator |
2026-03-29 00:56:50.912677 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-29 00:56:50.912683 | orchestrator | Sunday 29 March 2026 00:48:00 +0000 (0:00:01.878) 0:01:20.841 **********
2026-03-29 00:56:50.912696 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:56:50.912701 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:56:50.912707 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:56:50.912712 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:56:50.912718 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:56:50.912723 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:56:50.912729 | orchestrator |
2026-03-29 00:56:50.912771 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-29 00:56:50.912777 | orchestrator | Sunday 29 March 2026 00:48:03 +0000 (0:00:02.176) 0:01:23.018 **********
2026-03-29 00:56:50.912783 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:56:50.912789 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:56:50.912794 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:56:50.912800 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:56:50.912805 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:56:50.912811 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:56:50.912816 | orchestrator |
2026-03-29 00:56:50.912822 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-29 00:56:50.912828 | orchestrator | Sunday 29 March 2026 00:48:05 +0000 (0:00:02.875) 0:01:25.893 **********
2026-03-29 00:56:50.912835 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:56:50.912842 | orchestrator |
2026-03-29 00:56:50.912847 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-29 00:56:50.912852 | orchestrator | Sunday 29 March 2026 00:48:06 +0000 (0:00:00.955) 0:01:26.848 **********
2026-03-29 00:56:50.912856 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.912861 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.912877 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.912882 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.912887 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.912891 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.912896 | orchestrator |
2026-03-29 00:56:50.912901 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-29 00:56:50.912906 | orchestrator | Sunday 29 March 2026 00:48:07 +0000 (0:00:00.638) 0:01:27.487 **********
2026-03-29 00:56:50.912911 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.912915 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.912920 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.912925 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.912930 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.912934 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.912940 | orchestrator |
2026-03-29 00:56:50.912945 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-29 00:56:50.912949 | orchestrator | Sunday 29 March 2026 00:48:08 +0000 (0:00:00.638) 0:01:28.125 **********
2026-03-29 00:56:50.912954 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-29 00:56:50.912959 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-29 00:56:50.912964 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-29 00:56:50.912968 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-29 00:56:50.912975 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-29 00:56:50.912983 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-29 00:56:50.912991 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-29 00:56:50.912999 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-29 00:56:50.913006 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-29 00:56:50.913013 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-29 00:56:50.913021 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-29 00:56:50.913028 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-29 00:56:50.913036 | orchestrator |
2026-03-29 00:56:50.913043 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-29 00:56:50.913051 | orchestrator | Sunday 29 March 2026 00:48:09 +0000 (0:00:01.344) 0:01:29.470 **********
2026-03-29 00:56:50.913059 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:56:50.913070 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:56:50.913081 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:56:50.913088 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:56:50.913096 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:56:50.913104 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:56:50.913112 | orchestrator |
2026-03-29 00:56:50.913120 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-29 00:56:50.913128 | orchestrator | Sunday 29 March 2026 00:48:11 +0000 (0:00:01.460) 0:01:30.931 **********
2026-03-29 00:56:50.913136 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.913144 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.913151 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.913159 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.913167 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.913175 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.913183 | orchestrator |
2026-03-29 00:56:50.913190 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-29 00:56:50.913194 | orchestrator | Sunday 29 March 2026 00:48:11 +0000 (0:00:00.850) 0:01:31.782 **********
2026-03-29 00:56:50.913206 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.913211 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.913215 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.913220 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.913225 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.913230 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.913234 | orchestrator |
2026-03-29 00:56:50.913244 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-29 00:56:50.913249 | orchestrator | Sunday 29 March 2026 00:48:12 +0000 (0:00:01.064) 0:01:32.846 **********
2026-03-29 00:56:50.913254 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.913285 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.913291 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.913295 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.913300 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.913305 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.913310 | orchestrator |
2026-03-29 00:56:50.913314 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-29 00:56:50.913319 | orchestrator | Sunday 29 March 2026 00:48:13 +0000 (0:00:00.661) 0:01:33.508 **********
2026-03-29 00:56:50.913324 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:56:50.913330 | orchestrator |
2026-03-29 00:56:50.913335 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-29 00:56:50.913339 | orchestrator | Sunday 29 March 2026 00:48:14 +0000 (0:00:01.247) 0:01:34.756 **********
2026-03-29 00:56:50.913344 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.913349 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.913354 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.913359 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.913364 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.913369 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.913373 | orchestrator |
2026-03-29 00:56:50.913378 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-29 00:56:50.913383 | orchestrator | Sunday 29 March 2026 00:49:06 +0000 (0:00:51.566) 0:02:26.322 **********
2026-03-29 00:56:50.913388 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-29 00:56:50.913393 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-29 00:56:50.913398 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-29 00:56:50.913403 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-29 00:56:50.913408 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-29 00:56:50.913412 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-29 00:56:50.913417 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.913422 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.913427 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-29 00:56:50.913432 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-29 00:56:50.913437 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-29 00:56:50.913441 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.913446 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-29 00:56:50.913451 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-29 00:56:50.913456 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-29 00:56:50.913461 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.913465 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-29 00:56:50.913475 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-29 00:56:50.913480 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-29 00:56:50.913485 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.913490 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-29 00:56:50.913494 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-29 00:56:50.913499 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-29 00:56:50.913504 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.913509 | orchestrator |
2026-03-29 00:56:50.913514 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-29 00:56:50.913518 | orchestrator | Sunday 29 March 2026 00:49:07 +0000 (0:00:00.829) 0:02:27.151 **********
2026-03-29 00:56:50.913523 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.913528 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.913533 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.913538 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.913542 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.913547 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.913552 | orchestrator |
2026-03-29 00:56:50.913557 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-29 00:56:50.913561 | orchestrator | Sunday 29 March 2026 00:49:08 +0000 (0:00:00.862) 0:02:28.014 **********
2026-03-29 00:56:50.913566 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.913571 | orchestrator |
2026-03-29 00:56:50.913576 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-29 00:56:50.913627 | orchestrator | Sunday 29 March 2026 00:49:08 +0000 (0:00:00.145) 0:02:28.159 **********
2026-03-29 00:56:50.913634 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.913638 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.913643 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.913648 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.913653 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.913658 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.913662 | orchestrator |
2026-03-29 00:56:50.913667 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-29 00:56:50.913677 | orchestrator | Sunday 29 March 2026 00:49:08 +0000 (0:00:00.695) 0:02:28.855 **********
2026-03-29 00:56:50.913682 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.913687 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.913692 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.913697 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.913721 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.913726 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.913731 | orchestrator |
2026-03-29 00:56:50.913736 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-29 00:56:50.913741 | orchestrator | Sunday 29 March 2026 00:49:09 +0000 (0:00:00.732) 0:02:29.588 **********
2026-03-29 00:56:50.913746 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.913751 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.913755 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.913760 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.913765 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.913770 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.913775 | orchestrator |
2026-03-29 00:56:50.913779 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-29 00:56:50.913784 | orchestrator | Sunday 29 March 2026 00:49:10 +0000 (0:00:00.608) 0:02:30.196 **********
2026-03-29 00:56:50.913789 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.913794 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.913799 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.913808 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.913813 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.913818 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.913822 | orchestrator |
2026-03-29 00:56:50.913827 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-29 00:56:50.913832 | orchestrator | Sunday 29 March 2026 00:49:12 +0000 (0:00:01.918) 0:02:32.115 **********
2026-03-29 00:56:50.913837 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.913842 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.913847 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.913851 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.913856 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.913861 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.913866 | orchestrator |
2026-03-29 00:56:50.913870 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-29 00:56:50.913875 | orchestrator | Sunday 29 March 2026 00:49:12 +0000 (0:00:00.513) 0:02:32.629 **********
2026-03-29 00:56:50.913881 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:56:50.913887 | orchestrator |
2026-03-29 00:56:50.913892 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-29 00:56:50.913897 | orchestrator | Sunday 29 March 2026 00:49:13 +0000 (0:00:01.104) 0:02:33.734 **********
2026-03-29 00:56:50.913902 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.913907 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.913912 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.913916 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.913921 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.913926 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.913931 | orchestrator |
2026-03-29 00:56:50.913936 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-29 00:56:50.913940 | orchestrator | Sunday 29 March 2026 00:49:14 +0000 (0:00:00.621) 0:02:34.356 **********
2026-03-29 00:56:50.913945 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.913950 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.913955 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.913960 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.913964 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.913969 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.913974 | orchestrator |
2026-03-29 00:56:50.913979 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-29 00:56:50.913983 | orchestrator | Sunday 29 March 2026 00:49:15 +0000 (0:00:00.857) 0:02:35.214 **********
2026-03-29 00:56:50.913988 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.913993 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.913998 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.914003 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.914007 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.914061 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.914068 | orchestrator |
2026-03-29 00:56:50.914073 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-29 00:56:50.914078 | orchestrator | Sunday 29 March 2026 00:49:15 +0000 (0:00:00.650) 0:02:35.864 **********
2026-03-29 00:56:50.914082 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.914087 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.914092 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.914097 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.914101 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.914106 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.914111 | orchestrator |
2026-03-29 00:56:50.914116 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-29 00:56:50.914121 | orchestrator | Sunday 29 March 2026 00:49:16 +0000 (0:00:00.833) 0:02:36.698 **********
2026-03-29 00:56:50.914131 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.914135 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.914140 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.914144 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.914149 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.914153 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.914158 | orchestrator |
2026-03-29 00:56:50.914163 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-29 00:56:50.914167 | orchestrator | Sunday 29 March 2026 00:49:17 +0000 (0:00:00.738) 0:02:37.437 **********
2026-03-29 00:56:50.914172 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.914176 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.914181 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.914185 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.914190 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.914194 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.914199 | orchestrator |
2026-03-29 00:56:50.914207 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-29 00:56:50.914211 | orchestrator | Sunday 29 March 2026 00:49:18 +0000 (0:00:01.331) 0:02:38.769 **********
2026-03-29 00:56:50.914216 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.914220 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.914242 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.914247 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.914252 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.914256 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.914261 | orchestrator |
2026-03-29 00:56:50.914266 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-29 00:56:50.914270 | orchestrator | Sunday 29 March 2026 00:49:19 +0000 (0:00:00.659) 0:02:39.429 **********
2026-03-29 00:56:50.914275 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.914279 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.914284 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.914289 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.914293 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.914298 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.914302 | orchestrator |
2026-03-29 00:56:50.914307 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-29 00:56:50.914311 | orchestrator | Sunday 29 March 2026 00:49:20 +0000 (0:00:00.916) 0:02:40.346 **********
2026-03-29 00:56:50.914316 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.914321 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.914325 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.914330 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.914335 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.914339 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.914343 | orchestrator |
2026-03-29 00:56:50.914348 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-29 00:56:50.914353 | orchestrator | Sunday 29 March 2026 00:49:21 +0000 (0:00:01.086) 0:02:41.432 **********
2026-03-29 00:56:50.914358 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:56:50.914362 | orchestrator |
2026-03-29 00:56:50.914367 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-29 00:56:50.914371 | orchestrator | Sunday 29 March 2026 00:49:22 +0000 (0:00:01.081) 0:02:42.513 **********
2026-03-29 00:56:50.914376 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-03-29 00:56:50.914381 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-03-29 00:56:50.914386 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-03-29 00:56:50.914390 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-03-29 00:56:50.914395 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-03-29 00:56:50.914404 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-29 00:56:50.914409 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-03-29 00:56:50.914414 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-29 00:56:50.914418 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-29 00:56:50.914423 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-29 00:56:50.914427 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-29 00:56:50.914432 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-29 00:56:50.914436 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-29 00:56:50.914441 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-29 00:56:50.914445 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-29 00:56:50.914450 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-29 00:56:50.914455 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-29 00:56:50.914459 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-29 00:56:50.914464 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-29 00:56:50.914468 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-29 00:56:50.914474 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-29 00:56:50.914481 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-29 00:56:50.914489 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-29 00:56:50.914495 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-29 00:56:50.914502 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-29 00:56:50.914509 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-29 00:56:50.914516 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-29 00:56:50.914522 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-29 00:56:50.914530 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-29 00:56:50.914536 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-29 00:56:50.914543 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-29 00:56:50.914550 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-29 00:56:50.914556 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-29 00:56:50.914563 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-29 00:56:50.914571 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-29 00:56:50.914595 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-29 00:56:50.914603 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-29 00:56:50.914610 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-29 00:56:50.914617 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-29 00:56:50.914628 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-29 00:56:50.914636 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-29 00:56:50.914644 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-29 00:56:50.914677 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-29 00:56:50.914686 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-29 00:56:50.914693 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-29 00:56:50.914700 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-29 00:56:50.914707 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-29 00:56:50.914713 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-29 00:56:50.914729 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-29 00:56:50.914736 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-29 00:56:50.914744 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-29 00:56:50.914751 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-29 00:56:50.914816 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-29 00:56:50.914825 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-29 00:56:50.914830 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-29 00:56:50.914835 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-29 00:56:50.914839 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-29 00:56:50.914844 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-29 00:56:50.914848 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-29 00:56:50.914853 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-29 00:56:50.914857 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-29 00:56:50.914862 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-29 00:56:50.914866 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-29 00:56:50.914871 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-29 00:56:50.914876 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-29 00:56:50.914880 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-29 00:56:50.914884 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-29 00:56:50.914889 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-29 00:56:50.914893 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-29 00:56:50.914898 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-29 00:56:50.914904 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-29 00:56:50.914911 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-29 00:56:50.914918 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-29 00:56:50.914928 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-29 00:56:50.914937 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-29 00:56:50.914946 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-29 00:56:50.914953 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-29 00:56:50.914961 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-29 00:56:50.914969 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-29 00:56:50.914976 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-29 00:56:50.914983 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-29 00:56:50.914990 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-29 00:56:50.914997 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-29 00:56:50.915005 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-29 00:56:50.915012 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-03-29 00:56:50.915019 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-03-29 00:56:50.915026 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-03-29 00:56:50.915033 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-03-29 00:56:50.915046 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-03-29 00:56:50.915052 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-03-29 00:56:50.915058 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-03-29 00:56:50.915066 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-03-29 00:56:50.915073 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-03-29 00:56:50.915080 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-03-29 00:56:50.915086 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-03-29 00:56:50.915093 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-03-29 00:56:50.915099 | orchestrator | 2026-03-29 00:56:50.915111 | 
orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-29 00:56:50.915118 | orchestrator | Sunday 29 March 2026 00:49:28 +0000 (0:00:06.306) 0:02:48.820 ********** 2026-03-29 00:56:50.915124 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.915131 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.915175 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.915184 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:56:50.915192 | orchestrator | 2026-03-29 00:56:50.915199 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-29 00:56:50.915207 | orchestrator | Sunday 29 March 2026 00:49:30 +0000 (0:00:01.650) 0:02:50.470 ********** 2026-03-29 00:56:50.915214 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-29 00:56:50.915222 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-29 00:56:50.915229 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-29 00:56:50.915236 | orchestrator | 2026-03-29 00:56:50.915244 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-29 00:56:50.915249 | orchestrator | Sunday 29 March 2026 00:49:31 +0000 (0:00:00.823) 0:02:51.294 ********** 2026-03-29 00:56:50.915254 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-29 00:56:50.915258 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': 
'192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-29 00:56:50.915263 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-29 00:56:50.915267 | orchestrator | 2026-03-29 00:56:50.915272 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-29 00:56:50.915276 | orchestrator | Sunday 29 March 2026 00:49:32 +0000 (0:00:01.374) 0:02:52.669 ********** 2026-03-29 00:56:50.915281 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.915286 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.915290 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.915295 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.915299 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.915304 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.915308 | orchestrator | 2026-03-29 00:56:50.915313 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-29 00:56:50.915317 | orchestrator | Sunday 29 March 2026 00:49:33 +0000 (0:00:00.751) 0:02:53.421 ********** 2026-03-29 00:56:50.915322 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.915326 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.915331 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.915335 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.915340 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.915355 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.915360 | orchestrator | 2026-03-29 00:56:50.915364 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-29 00:56:50.915369 | orchestrator | Sunday 29 March 2026 00:49:34 +0000 (0:00:00.628) 0:02:54.049 ********** 2026-03-29 00:56:50.915374 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.915378 | 
orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.915382 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.915387 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.915392 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.915396 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.915401 | orchestrator | 2026-03-29 00:56:50.915405 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-29 00:56:50.915410 | orchestrator | Sunday 29 March 2026 00:49:35 +0000 (0:00:01.421) 0:02:55.470 ********** 2026-03-29 00:56:50.915414 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.915419 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.915423 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.915428 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.915432 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.915437 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.915441 | orchestrator | 2026-03-29 00:56:50.915447 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-29 00:56:50.915454 | orchestrator | Sunday 29 March 2026 00:49:36 +0000 (0:00:00.725) 0:02:56.196 ********** 2026-03-29 00:56:50.915461 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.915468 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.915488 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.915498 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.915505 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.915512 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.915519 | orchestrator | 2026-03-29 00:56:50.915526 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-29 00:56:50.915534 | orchestrator | Sunday 29 March 
2026 00:49:37 +0000 (0:00:01.012) 0:02:57.209 ********** 2026-03-29 00:56:50.915541 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.915549 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.915555 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.915562 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.915568 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.915574 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.915601 | orchestrator | 2026-03-29 00:56:50.915609 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-29 00:56:50.915616 | orchestrator | Sunday 29 March 2026 00:49:38 +0000 (0:00:00.706) 0:02:57.916 ********** 2026-03-29 00:56:50.915628 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.915636 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.915643 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.915650 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.915657 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.915665 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.915672 | orchestrator | 2026-03-29 00:56:50.915709 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-29 00:56:50.915714 | orchestrator | Sunday 29 March 2026 00:49:39 +0000 (0:00:01.110) 0:02:59.027 ********** 2026-03-29 00:56:50.915719 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.915724 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.915728 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.915733 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.915737 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.915742 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.915746 | 
orchestrator | 2026-03-29 00:56:50.915762 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-29 00:56:50.915766 | orchestrator | Sunday 29 March 2026 00:49:39 +0000 (0:00:00.604) 0:02:59.631 ********** 2026-03-29 00:56:50.915771 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.915775 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.915780 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.915784 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.915789 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.915793 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.915798 | orchestrator | 2026-03-29 00:56:50.915802 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-29 00:56:50.915807 | orchestrator | Sunday 29 March 2026 00:49:41 +0000 (0:00:01.918) 0:03:01.550 ********** 2026-03-29 00:56:50.915811 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.915816 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.915820 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.915825 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.915829 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.915834 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.915838 | orchestrator | 2026-03-29 00:56:50.915843 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-29 00:56:50.915848 | orchestrator | Sunday 29 March 2026 00:49:42 +0000 (0:00:00.530) 0:03:02.080 ********** 2026-03-29 00:56:50.915852 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.915857 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.915861 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.915866 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.915870 | orchestrator | skipping: [testbed-node-1] 
2026-03-29 00:56:50.915875 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.915880 | orchestrator | 2026-03-29 00:56:50.915884 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-29 00:56:50.915889 | orchestrator | Sunday 29 March 2026 00:49:43 +0000 (0:00:00.935) 0:03:03.015 ********** 2026-03-29 00:56:50.915893 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.915898 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.915902 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.915907 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.915911 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.915916 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.915920 | orchestrator | 2026-03-29 00:56:50.915925 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-29 00:56:50.915929 | orchestrator | Sunday 29 March 2026 00:49:43 +0000 (0:00:00.737) 0:03:03.753 ********** 2026-03-29 00:56:50.915934 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-29 00:56:50.915938 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-29 00:56:50.915943 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.915948 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-29 00:56:50.915952 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.915957 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.915961 | orchestrator | 2026-03-29 00:56:50.915966 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-29 00:56:50.915970 | orchestrator | 
Sunday 29 March 2026 00:49:45 +0000 (0:00:01.170) 0:03:04.924 ********** 2026-03-29 00:56:50.915977 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-03-29 00:56:50.915989 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-03-29 00:56:50.915995 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.915999 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-03-29 00:56:50.916008 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-03-29 00:56:50.916028 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.916034 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.916038 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.916042 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.916047 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-03-29 00:56:50.916052 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-03-29 00:56:50.916057 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.916061 | orchestrator | 2026-03-29 00:56:50.916066 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-29 00:56:50.916070 | orchestrator | Sunday 29 March 2026 00:49:46 +0000 (0:00:01.244) 0:03:06.168 ********** 2026-03-29 00:56:50.916075 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.916079 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.916084 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.916088 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.916093 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.916097 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.916102 | orchestrator | 2026-03-29 00:56:50.916106 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-29 00:56:50.916111 | orchestrator | Sunday 29 March 2026 00:49:47 +0000 (0:00:01.040) 0:03:07.209 ********** 2026-03-29 00:56:50.916115 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.916120 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.916124 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.916129 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.916133 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.916137 | 
orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.916142 | orchestrator | 2026-03-29 00:56:50.916147 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-29 00:56:50.916151 | orchestrator | Sunday 29 March 2026 00:49:47 +0000 (0:00:00.524) 0:03:07.733 ********** 2026-03-29 00:56:50.916156 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.916160 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.916165 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.916169 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.916173 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.916178 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.916187 | orchestrator | 2026-03-29 00:56:50.916192 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-29 00:56:50.916197 | orchestrator | Sunday 29 March 2026 00:49:48 +0000 (0:00:00.689) 0:03:08.422 ********** 2026-03-29 00:56:50.916201 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.916206 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.916210 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.916215 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.916219 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.916223 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.916228 | orchestrator | 2026-03-29 00:56:50.916233 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-29 00:56:50.916237 | orchestrator | Sunday 29 March 2026 00:49:49 +0000 (0:00:00.645) 0:03:09.068 ********** 2026-03-29 00:56:50.916242 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.916246 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.916251 | orchestrator | 
skipping: [testbed-node-5] 2026-03-29 00:56:50.916255 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.916259 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.916264 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.916268 | orchestrator | 2026-03-29 00:56:50.916273 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-29 00:56:50.916277 | orchestrator | Sunday 29 March 2026 00:49:49 +0000 (0:00:00.792) 0:03:09.860 ********** 2026-03-29 00:56:50.916282 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.916287 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.916291 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.916295 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.916300 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.916304 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.916309 | orchestrator | 2026-03-29 00:56:50.916313 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-29 00:56:50.916318 | orchestrator | Sunday 29 March 2026 00:49:50 +0000 (0:00:00.811) 0:03:10.671 ********** 2026-03-29 00:56:50.916322 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:56:50.916327 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:56:50.916331 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:56:50.916336 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.916340 | orchestrator | 2026-03-29 00:56:50.916345 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-29 00:56:50.916349 | orchestrator | Sunday 29 March 2026 00:49:51 +0000 (0:00:00.340) 0:03:11.011 ********** 2026-03-29 00:56:50.916354 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:56:50.916359 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:56:50.916366 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:56:50.916370 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.916377 | orchestrator | 2026-03-29 00:56:50.916384 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-29 00:56:50.916411 | orchestrator | Sunday 29 March 2026 00:49:51 +0000 (0:00:00.528) 0:03:11.539 ********** 2026-03-29 00:56:50.916419 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:56:50.916426 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:56:50.916434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:56:50.916441 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.916449 | orchestrator | 2026-03-29 00:56:50.916457 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-29 00:56:50.916465 | orchestrator | Sunday 29 March 2026 00:49:52 +0000 (0:00:00.606) 0:03:12.146 ********** 2026-03-29 00:56:50.916473 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.916480 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.916493 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.916499 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.916504 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.916509 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.916513 | orchestrator | 2026-03-29 00:56:50.916518 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-29 00:56:50.916522 | orchestrator | Sunday 29 March 2026 00:49:53 +0000 (0:00:00.851) 0:03:12.997 ********** 2026-03-29 00:56:50.916527 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-29 00:56:50.916531 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-03-29 00:56:50.916536 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-29 00:56:50.916541 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-29 00:56:50.916545 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.916550 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-29 00:56:50.916554 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.916559 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-29 00:56:50.916563 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.916568 | orchestrator | 2026-03-29 00:56:50.916572 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-29 00:56:50.916577 | orchestrator | Sunday 29 March 2026 00:49:54 +0000 (0:00:01.658) 0:03:14.656 ********** 2026-03-29 00:56:50.916602 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.916609 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:56:50.916614 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.916618 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.916622 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.916627 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.916631 | orchestrator | 2026-03-29 00:56:50.916636 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-29 00:56:50.916640 | orchestrator | Sunday 29 March 2026 00:49:57 +0000 (0:00:02.326) 0:03:16.983 ********** 2026-03-29 00:56:50.916645 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.916649 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:56:50.916654 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.916658 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.916662 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.916667 | orchestrator | changed: [testbed-node-2] 2026-03-29 
00:56:50.916671 | orchestrator | 2026-03-29 00:56:50.916676 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-29 00:56:50.916680 | orchestrator | Sunday 29 March 2026 00:49:58 +0000 (0:00:01.267) 0:03:18.251 ********** 2026-03-29 00:56:50.916685 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.916689 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.916694 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.916699 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:50.916703 | orchestrator | 2026-03-29 00:56:50.916708 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-29 00:56:50.916712 | orchestrator | Sunday 29 March 2026 00:49:59 +0000 (0:00:00.811) 0:03:19.062 ********** 2026-03-29 00:56:50.916717 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.916721 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.916726 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.916730 | orchestrator | 2026-03-29 00:56:50.916735 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-29 00:56:50.916739 | orchestrator | Sunday 29 March 2026 00:49:59 +0000 (0:00:00.238) 0:03:19.301 ********** 2026-03-29 00:56:50.916744 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.916749 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.916753 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.916758 | orchestrator | 2026-03-29 00:56:50.916762 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-29 00:56:50.916771 | orchestrator | Sunday 29 March 2026 00:50:00 +0000 (0:00:01.304) 0:03:20.605 ********** 2026-03-29 00:56:50.916776 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-0)
2026-03-29 00:56:50.916780 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-29 00:56:50.916785 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-29 00:56:50.916789 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.916794 | orchestrator |
2026-03-29 00:56:50.916798 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-29 00:56:50.916803 | orchestrator | Sunday 29 March 2026 00:50:01 +0000 (0:00:00.674) 0:03:21.280 **********
2026-03-29 00:56:50.916807 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.916812 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.916816 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.916821 | orchestrator |
2026-03-29 00:56:50.916825 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-29 00:56:50.916830 | orchestrator | Sunday 29 March 2026 00:50:01 +0000 (0:00:00.324) 0:03:21.605 **********
2026-03-29 00:56:50.916834 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.916838 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.916843 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.916851 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:56:50.916856 | orchestrator |
2026-03-29 00:56:50.916861 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-29 00:56:50.916884 | orchestrator | Sunday 29 March 2026 00:50:02 +0000 (0:00:01.015) 0:03:22.621 **********
2026-03-29 00:56:50.916892 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 00:56:50.916899 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 00:56:50.916906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 00:56:50.916914 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.916922 | orchestrator |
2026-03-29 00:56:50.916930 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-29 00:56:50.916937 | orchestrator | Sunday 29 March 2026 00:50:03 +0000 (0:00:00.389) 0:03:23.011 **********
2026-03-29 00:56:50.916944 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.916951 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.916958 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.916963 | orchestrator |
2026-03-29 00:56:50.916968 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-29 00:56:50.916972 | orchestrator | Sunday 29 March 2026 00:50:03 +0000 (0:00:00.356) 0:03:23.368 **********
2026-03-29 00:56:50.916977 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.916981 | orchestrator |
2026-03-29 00:56:50.916986 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-29 00:56:50.916990 | orchestrator | Sunday 29 March 2026 00:50:03 +0000 (0:00:00.509) 0:03:23.877 **********
2026-03-29 00:56:50.916995 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.916999 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.917004 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.917008 | orchestrator |
2026-03-29 00:56:50.917013 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-29 00:56:50.917017 | orchestrator | Sunday 29 March 2026 00:50:04 +0000 (0:00:00.271) 0:03:24.149 **********
2026-03-29 00:56:50.917022 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.917026 | orchestrator |
2026-03-29 00:56:50.917031 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-29 00:56:50.917035 | orchestrator | Sunday 29 March 2026 00:50:04 +0000 (0:00:00.182) 0:03:24.331 **********
2026-03-29 00:56:50.917040 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.917044 | orchestrator |
2026-03-29 00:56:50.917049 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-29 00:56:50.917058 | orchestrator | Sunday 29 March 2026 00:50:04 +0000 (0:00:00.188) 0:03:24.519 **********
2026-03-29 00:56:50.917062 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.917067 | orchestrator |
2026-03-29 00:56:50.917071 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-29 00:56:50.917076 | orchestrator | Sunday 29 March 2026 00:50:04 +0000 (0:00:00.098) 0:03:24.618 **********
2026-03-29 00:56:50.917080 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.917085 | orchestrator |
2026-03-29 00:56:50.917089 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-29 00:56:50.917094 | orchestrator | Sunday 29 March 2026 00:50:04 +0000 (0:00:00.203) 0:03:24.822 **********
2026-03-29 00:56:50.917098 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.917103 | orchestrator |
2026-03-29 00:56:50.917107 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-29 00:56:50.917112 | orchestrator | Sunday 29 March 2026 00:50:05 +0000 (0:00:00.201) 0:03:25.023 **********
2026-03-29 00:56:50.917116 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 00:56:50.917121 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 00:56:50.917125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 00:56:50.917130 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.917135 | orchestrator |
2026-03-29 00:56:50.917139 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-29 00:56:50.917144 | orchestrator | Sunday 29 March 2026 00:50:05 +0000 (0:00:00.370) 0:03:25.394 **********
2026-03-29 00:56:50.917148 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.917153 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.917157 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.917162 | orchestrator |
2026-03-29 00:56:50.917166 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-29 00:56:50.917171 | orchestrator | Sunday 29 March 2026 00:50:05 +0000 (0:00:00.415) 0:03:25.809 **********
2026-03-29 00:56:50.917175 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.917180 | orchestrator |
2026-03-29 00:56:50.917184 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-29 00:56:50.917189 | orchestrator | Sunday 29 March 2026 00:50:06 +0000 (0:00:00.243) 0:03:26.053 **********
2026-03-29 00:56:50.917193 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.917198 | orchestrator |
2026-03-29 00:56:50.917202 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-29 00:56:50.917207 | orchestrator | Sunday 29 March 2026 00:50:06 +0000 (0:00:00.227) 0:03:26.283 **********
2026-03-29 00:56:50.917211 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.917216 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.917220 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.917225 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:56:50.917229 | orchestrator |
2026-03-29 00:56:50.917234 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-29 00:56:50.917238 | orchestrator | Sunday 29 March 2026 00:50:07 +0000 (0:00:00.779) 0:03:27.063 **********
2026-03-29 00:56:50.917243 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.917247 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.917252 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.917256 | orchestrator |
2026-03-29 00:56:50.917261 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-29 00:56:50.917269 | orchestrator | Sunday 29 March 2026 00:50:07 +0000 (0:00:00.698) 0:03:27.762 **********
2026-03-29 00:56:50.917274 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:56:50.917278 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:56:50.917283 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:56:50.917287 | orchestrator |
2026-03-29 00:56:50.917310 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-29 00:56:50.917319 | orchestrator | Sunday 29 March 2026 00:50:08 +0000 (0:00:01.137) 0:03:28.899 **********
2026-03-29 00:56:50.917324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 00:56:50.917329 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 00:56:50.917333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 00:56:50.917338 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.917342 | orchestrator |
2026-03-29 00:56:50.917350 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-29 00:56:50.917357 | orchestrator | Sunday 29 March 2026 00:50:09 +0000 (0:00:00.640) 0:03:29.540 **********
2026-03-29 00:56:50.917365 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.917373 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.917380 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.917387 | orchestrator |
2026-03-29 00:56:50.917395 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-29 00:56:50.917401 | orchestrator | Sunday 29 March 2026 00:50:10 +0000 (0:00:00.447) 0:03:29.987 **********
2026-03-29 00:56:50.917405 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.917410 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.917414 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.917419 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:56:50.917423 | orchestrator |
2026-03-29 00:56:50.917428 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-29 00:56:50.917432 | orchestrator | Sunday 29 March 2026 00:50:11 +0000 (0:00:01.459) 0:03:31.446 **********
2026-03-29 00:56:50.917437 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.917441 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.917445 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.917450 | orchestrator |
2026-03-29 00:56:50.917454 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-29 00:56:50.917459 | orchestrator | Sunday 29 March 2026 00:50:11 +0000 (0:00:00.419) 0:03:31.866 **********
2026-03-29 00:56:50.917465 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:56:50.917472 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:56:50.917482 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:56:50.917490 | orchestrator |
2026-03-29 00:56:50.917498 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-29 00:56:50.917504 | orchestrator | Sunday 29 March 2026 00:50:13 +0000 (0:00:01.522) 0:03:33.388 **********
2026-03-29 00:56:50.917511 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 00:56:50.917518 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 00:56:50.917525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 00:56:50.917531 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.917538 | orchestrator |
2026-03-29 00:56:50.917545 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-29 00:56:50.917552 | orchestrator | Sunday 29 March 2026 00:50:14 +0000 (0:00:01.068) 0:03:34.457 **********
2026-03-29 00:56:50.917559 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.917566 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.917573 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.917651 | orchestrator |
2026-03-29 00:56:50.917661 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-03-29 00:56:50.917666 | orchestrator | Sunday 29 March 2026 00:50:14 +0000 (0:00:00.326) 0:03:34.784 **********
2026-03-29 00:56:50.917670 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.917675 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.917680 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.917684 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.917689 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:50.917693 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:50.917698 | orchestrator |
2026-03-29 00:56:50.917709 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-29 00:56:50.917714 | orchestrator | Sunday 29 March 2026 00:50:15 +0000 (0:00:00.580) 0:03:35.365 **********
2026-03-29 00:56:50.917719 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.917723 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.917728 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.917732 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:56:50.917737 | orchestrator |
2026-03-29 00:56:50.917742 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-29 00:56:50.917746 | orchestrator | Sunday 29 March 2026 00:50:16 +0000 (0:00:00.859) 0:03:36.224 **********
2026-03-29 00:56:50.917751 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.917755 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.917760 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.917765 | orchestrator |
2026-03-29 00:56:50.917772 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-29 00:56:50.917779 | orchestrator | Sunday 29 March 2026 00:50:16 +0000 (0:00:00.287) 0:03:36.512 **********
2026-03-29 00:56:50.917786 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:56:50.917798 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:56:50.917805 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:56:50.917813 | orchestrator |
2026-03-29 00:56:50.917822 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-29 00:56:50.917828 | orchestrator | Sunday 29 March 2026 00:50:17 +0000 (0:00:01.092) 0:03:37.604 **********
2026-03-29 00:56:50.917835 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 00:56:50.917842 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-29 00:56:50.917849 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-29 00:56:50.917855 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.917862 | orchestrator |
2026-03-29 00:56:50.917874 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-29 00:56:50.917881 | orchestrator | Sunday 29 March 2026 00:50:18 +0000 (0:00:01.193) 0:03:38.798 **********
2026-03-29 00:56:50.917888 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.917931 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.917940 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.917947 | orchestrator | 2026-03-29 00:56:50.917953 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-29 00:56:50.917961 | orchestrator | 2026-03-29 00:56:50.917968 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-29 00:56:50.917974 | orchestrator | Sunday 29 March 2026 00:50:19 +0000 (0:00:00.632) 0:03:39.431 ********** 2026-03-29 00:56:50.917981 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:50.917989 | orchestrator | 2026-03-29 00:56:50.917996 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-29 00:56:50.918004 | orchestrator | Sunday 29 March 2026 00:50:20 +0000 (0:00:00.738) 0:03:40.169 ********** 2026-03-29 00:56:50.918045 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:50.918052 | orchestrator | 2026-03-29 00:56:50.918056 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-29 00:56:50.918061 | orchestrator | Sunday 29 March 2026 00:50:20 +0000 (0:00:00.495) 0:03:40.665 ********** 2026-03-29 00:56:50.918066 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.918070 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.918075 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.918079 | orchestrator | 2026-03-29 00:56:50.918084 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-29 00:56:50.918088 | orchestrator | Sunday 29 March 2026 00:50:21 +0000 (0:00:00.710) 0:03:41.375 ********** 2026-03-29 00:56:50.918099 | 
orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.918104 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.918109 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.918113 | orchestrator | 2026-03-29 00:56:50.918118 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-29 00:56:50.918122 | orchestrator | Sunday 29 March 2026 00:50:21 +0000 (0:00:00.290) 0:03:41.666 ********** 2026-03-29 00:56:50.918127 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.918131 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.918136 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.918140 | orchestrator | 2026-03-29 00:56:50.918145 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-29 00:56:50.918149 | orchestrator | Sunday 29 March 2026 00:50:22 +0000 (0:00:00.414) 0:03:42.081 ********** 2026-03-29 00:56:50.918154 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.918158 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.918163 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.918167 | orchestrator | 2026-03-29 00:56:50.918171 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-29 00:56:50.918176 | orchestrator | Sunday 29 March 2026 00:50:22 +0000 (0:00:00.266) 0:03:42.347 ********** 2026-03-29 00:56:50.918180 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.918184 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.918188 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.918192 | orchestrator | 2026-03-29 00:56:50.918196 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-29 00:56:50.918200 | orchestrator | Sunday 29 March 2026 00:50:23 +0000 (0:00:00.653) 0:03:43.000 ********** 2026-03-29 00:56:50.918204 | orchestrator | 
skipping: [testbed-node-0] 2026-03-29 00:56:50.918208 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.918212 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.918216 | orchestrator | 2026-03-29 00:56:50.918221 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-29 00:56:50.918225 | orchestrator | Sunday 29 March 2026 00:50:23 +0000 (0:00:00.289) 0:03:43.290 ********** 2026-03-29 00:56:50.918229 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.918233 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.918237 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.918241 | orchestrator | 2026-03-29 00:56:50.918245 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-29 00:56:50.918249 | orchestrator | Sunday 29 March 2026 00:50:23 +0000 (0:00:00.453) 0:03:43.743 ********** 2026-03-29 00:56:50.918253 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.918257 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.918262 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.918268 | orchestrator | 2026-03-29 00:56:50.918275 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-29 00:56:50.918282 | orchestrator | Sunday 29 March 2026 00:50:24 +0000 (0:00:00.665) 0:03:44.409 ********** 2026-03-29 00:56:50.918288 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.918295 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.918301 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.918307 | orchestrator | 2026-03-29 00:56:50.918314 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-29 00:56:50.918319 | orchestrator | Sunday 29 March 2026 00:50:25 +0000 (0:00:00.709) 0:03:45.119 ********** 2026-03-29 00:56:50.918325 | orchestrator | skipping: [testbed-node-0] 2026-03-29 
00:56:50.918331 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.918337 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.918343 | orchestrator | 2026-03-29 00:56:50.918349 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-29 00:56:50.918354 | orchestrator | Sunday 29 March 2026 00:50:25 +0000 (0:00:00.346) 0:03:45.465 ********** 2026-03-29 00:56:50.918360 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.918372 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.918378 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.918384 | orchestrator | 2026-03-29 00:56:50.918390 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-29 00:56:50.918396 | orchestrator | Sunday 29 March 2026 00:50:26 +0000 (0:00:00.605) 0:03:46.071 ********** 2026-03-29 00:56:50.918401 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.918417 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.918424 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.918430 | orchestrator | 2026-03-29 00:56:50.918436 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-29 00:56:50.918476 | orchestrator | Sunday 29 March 2026 00:50:26 +0000 (0:00:00.362) 0:03:46.433 ********** 2026-03-29 00:56:50.918483 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.918489 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.918496 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.918502 | orchestrator | 2026-03-29 00:56:50.918508 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-29 00:56:50.918514 | orchestrator | Sunday 29 March 2026 00:50:26 +0000 (0:00:00.432) 0:03:46.866 ********** 2026-03-29 00:56:50.918521 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.918527 | 
orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.918533 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.918539 | orchestrator | 2026-03-29 00:56:50.918545 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-29 00:56:50.918552 | orchestrator | Sunday 29 March 2026 00:50:27 +0000 (0:00:00.362) 0:03:47.229 ********** 2026-03-29 00:56:50.918559 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.918565 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.918571 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.918602 | orchestrator | 2026-03-29 00:56:50.918610 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-29 00:56:50.918616 | orchestrator | Sunday 29 March 2026 00:50:27 +0000 (0:00:00.600) 0:03:47.829 ********** 2026-03-29 00:56:50.918622 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.918629 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.918635 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.918641 | orchestrator | 2026-03-29 00:56:50.918647 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-29 00:56:50.918653 | orchestrator | Sunday 29 March 2026 00:50:28 +0000 (0:00:00.326) 0:03:48.156 ********** 2026-03-29 00:56:50.918659 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.918664 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.918670 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.918676 | orchestrator | 2026-03-29 00:56:50.918682 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-29 00:56:50.918688 | orchestrator | Sunday 29 March 2026 00:50:28 +0000 (0:00:00.408) 0:03:48.564 ********** 2026-03-29 00:56:50.918695 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.918701 | orchestrator | ok: 
[testbed-node-1] 2026-03-29 00:56:50.918707 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.918713 | orchestrator | 2026-03-29 00:56:50.918719 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-29 00:56:50.918726 | orchestrator | Sunday 29 March 2026 00:50:28 +0000 (0:00:00.302) 0:03:48.867 ********** 2026-03-29 00:56:50.918732 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.918739 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.918745 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.918752 | orchestrator | 2026-03-29 00:56:50.918759 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-29 00:56:50.918768 | orchestrator | Sunday 29 March 2026 00:50:29 +0000 (0:00:00.706) 0:03:49.574 ********** 2026-03-29 00:56:50.918772 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.918776 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.918780 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.918793 | orchestrator | 2026-03-29 00:56:50.918797 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-29 00:56:50.918801 | orchestrator | Sunday 29 March 2026 00:50:29 +0000 (0:00:00.300) 0:03:49.875 ********** 2026-03-29 00:56:50.918806 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:50.918810 | orchestrator | 2026-03-29 00:56:50.918814 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-29 00:56:50.918819 | orchestrator | Sunday 29 March 2026 00:50:30 +0000 (0:00:00.592) 0:03:50.467 ********** 2026-03-29 00:56:50.918823 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.918827 | orchestrator | 2026-03-29 00:56:50.918831 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] 
***************************** 2026-03-29 00:56:50.918835 | orchestrator | Sunday 29 March 2026 00:50:30 +0000 (0:00:00.261) 0:03:50.729 ********** 2026-03-29 00:56:50.918839 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-29 00:56:50.918844 | orchestrator | 2026-03-29 00:56:50.918848 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-29 00:56:50.918852 | orchestrator | Sunday 29 March 2026 00:50:31 +0000 (0:00:00.926) 0:03:51.655 ********** 2026-03-29 00:56:50.918856 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.918860 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.918864 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.918868 | orchestrator | 2026-03-29 00:56:50.918872 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-29 00:56:50.918876 | orchestrator | Sunday 29 March 2026 00:50:32 +0000 (0:00:00.281) 0:03:51.937 ********** 2026-03-29 00:56:50.918880 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.918885 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.918889 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.918893 | orchestrator | 2026-03-29 00:56:50.918897 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-29 00:56:50.918901 | orchestrator | Sunday 29 March 2026 00:50:32 +0000 (0:00:00.392) 0:03:52.330 ********** 2026-03-29 00:56:50.918905 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.918909 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.918913 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.918917 | orchestrator | 2026-03-29 00:56:50.918922 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-29 00:56:50.918926 | orchestrator | Sunday 29 March 2026 00:50:33 +0000 (0:00:01.060) 0:03:53.390 ********** 2026-03-29 
00:56:50.918930 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.918934 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.918938 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.918942 | orchestrator | 2026-03-29 00:56:50.918946 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-29 00:56:50.918957 | orchestrator | Sunday 29 March 2026 00:50:34 +0000 (0:00:01.183) 0:03:54.574 ********** 2026-03-29 00:56:50.918964 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.918970 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.918978 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.918988 | orchestrator | 2026-03-29 00:56:50.919035 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-29 00:56:50.919043 | orchestrator | Sunday 29 March 2026 00:50:35 +0000 (0:00:00.859) 0:03:55.433 ********** 2026-03-29 00:56:50.919049 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.919055 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.919061 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.919068 | orchestrator | 2026-03-29 00:56:50.919073 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-29 00:56:50.919080 | orchestrator | Sunday 29 March 2026 00:50:36 +0000 (0:00:00.774) 0:03:56.208 ********** 2026-03-29 00:56:50.919086 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.919093 | orchestrator | 2026-03-29 00:56:50.919099 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-29 00:56:50.919113 | orchestrator | Sunday 29 March 2026 00:50:37 +0000 (0:00:01.673) 0:03:57.882 ********** 2026-03-29 00:56:50.919119 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.919126 | orchestrator | 2026-03-29 00:56:50.919133 | orchestrator | TASK [ceph-mon : Copy admin 
keyring over to mons] ****************************** 2026-03-29 00:56:50.919139 | orchestrator | Sunday 29 March 2026 00:50:38 +0000 (0:00:00.635) 0:03:58.518 ********** 2026-03-29 00:56:50.919146 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-29 00:56:50.919151 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:56:50.919155 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:56:50.919159 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-29 00:56:50.919164 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-29 00:56:50.919168 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-29 00:56:50.919172 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 00:56:50.919176 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-03-29 00:56:50.919182 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 00:56:50.919189 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-29 00:56:50.919195 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-29 00:56:50.919202 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-29 00:56:50.919208 | orchestrator | 2026-03-29 00:56:50.919214 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-29 00:56:50.919221 | orchestrator | Sunday 29 March 2026 00:50:41 +0000 (0:00:03.290) 0:04:01.808 ********** 2026-03-29 00:56:50.919228 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.919235 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.919242 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.919249 | orchestrator | 2026-03-29 00:56:50.919256 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] 
************************** 2026-03-29 00:56:50.919261 | orchestrator | Sunday 29 March 2026 00:50:43 +0000 (0:00:01.884) 0:04:03.693 ********** 2026-03-29 00:56:50.919268 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.919274 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.919281 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.919288 | orchestrator | 2026-03-29 00:56:50.919295 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-29 00:56:50.919302 | orchestrator | Sunday 29 March 2026 00:50:44 +0000 (0:00:00.284) 0:04:03.977 ********** 2026-03-29 00:56:50.919309 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.919317 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.919323 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.919330 | orchestrator | 2026-03-29 00:56:50.919337 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-29 00:56:50.919344 | orchestrator | Sunday 29 March 2026 00:50:44 +0000 (0:00:00.437) 0:04:04.415 ********** 2026-03-29 00:56:50.919348 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.919352 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.919356 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.919360 | orchestrator | 2026-03-29 00:56:50.919364 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-29 00:56:50.919368 | orchestrator | Sunday 29 March 2026 00:50:47 +0000 (0:00:02.890) 0:04:07.306 ********** 2026-03-29 00:56:50.919373 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.919377 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.919381 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.919385 | orchestrator | 2026-03-29 00:56:50.919389 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-29 
00:56:50.919393 | orchestrator | Sunday 29 March 2026 00:50:48 +0000 (0:00:01.448) 0:04:08.754 ********** 2026-03-29 00:56:50.919402 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.919406 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.919410 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.919414 | orchestrator | 2026-03-29 00:56:50.919418 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-29 00:56:50.919422 | orchestrator | Sunday 29 March 2026 00:50:49 +0000 (0:00:00.285) 0:04:09.040 ********** 2026-03-29 00:56:50.919426 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:50.919430 | orchestrator | 2026-03-29 00:56:50.919435 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-29 00:56:50.919439 | orchestrator | Sunday 29 March 2026 00:50:49 +0000 (0:00:00.464) 0:04:09.504 ********** 2026-03-29 00:56:50.919443 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.919447 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.919451 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.919455 | orchestrator | 2026-03-29 00:56:50.919459 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-29 00:56:50.919463 | orchestrator | Sunday 29 March 2026 00:50:50 +0000 (0:00:00.402) 0:04:09.907 ********** 2026-03-29 00:56:50.919472 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.919477 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.919483 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.919492 | orchestrator | 2026-03-29 00:56:50.919502 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-29 00:56:50.919547 | orchestrator | Sunday 29 March 2026 00:50:50 
+0000 (0:00:00.306) 0:04:10.214 ********** 2026-03-29 00:56:50.919557 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:50.919562 | orchestrator | 2026-03-29 00:56:50.919566 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-29 00:56:50.919570 | orchestrator | Sunday 29 March 2026 00:50:50 +0000 (0:00:00.498) 0:04:10.712 ********** 2026-03-29 00:56:50.919574 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.919602 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.919607 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.919611 | orchestrator | 2026-03-29 00:56:50.919615 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-29 00:56:50.919619 | orchestrator | Sunday 29 March 2026 00:50:54 +0000 (0:00:03.302) 0:04:14.015 ********** 2026-03-29 00:56:50.919623 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.919627 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.919631 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.919635 | orchestrator | 2026-03-29 00:56:50.919639 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-29 00:56:50.919644 | orchestrator | Sunday 29 March 2026 00:50:55 +0000 (0:00:01.455) 0:04:15.471 ********** 2026-03-29 00:56:50.919648 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.919652 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.919656 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.919660 | orchestrator | 2026-03-29 00:56:50.919664 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-29 00:56:50.919668 | orchestrator | Sunday 29 March 2026 00:50:57 +0000 (0:00:02.277) 0:04:17.749 ********** 2026-03-29 00:56:50.919675 | 
orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.919682 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.919688 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.919695 | orchestrator | 2026-03-29 00:56:50.919701 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-29 00:56:50.919707 | orchestrator | Sunday 29 March 2026 00:51:00 +0000 (0:00:02.295) 0:04:20.045 ********** 2026-03-29 00:56:50.919713 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:50.919719 | orchestrator | 2026-03-29 00:56:50.919732 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-29 00:56:50.919739 | orchestrator | Sunday 29 March 2026 00:51:01 +0000 (0:00:00.966) 0:04:21.012 ********** 2026-03-29 00:56:50.919747 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
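The "FAILED - RETRYING ... (10 retries left)" line above is ceph-ansible polling until the monitors report quorum, using Ansible's `until`/`retries`/`delay` mechanism. A minimal sketch of that retry pattern in POSIX shell (the real task runs a ceph status check inside the mon container; the command below is only a stand-in, not taken from the log):

```shell
#!/bin/sh
# Minimal sketch of an Ansible-style "retry until success" loop, as used by
# "Waiting for the monitor(s) to form the quorum...".
# wait_for COMMAND [RETRIES] [DELAY]; returns 0 once COMMAND succeeds.
wait_for() {
  cmd=$1; retries=${2:-10}; delay=${3:-5}
  i=0
  while ! eval "$cmd" >/dev/null 2>&1; do
    i=$((i + 1))
    [ "$i" -ge "$retries" ] && return 1   # out of retries, give up
    sleep "$delay"
  done
  return 0
}

# Stand-in for the real check, which would be something like
# `ceph quorum_status` executed in the mon container (an assumption):
wait_for "true" 10 0 && echo "quorum formed"
```

In this run the check succeeded after one retry; the next task header shows the wait cost about 21 seconds (0:00:21.393) of the play.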
2026-03-29 00:56:50.919754 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.919761 | orchestrator | 2026-03-29 00:56:50.919766 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-29 00:56:50.919770 | orchestrator | Sunday 29 March 2026 00:51:22 +0000 (0:00:21.393) 0:04:42.405 ********** 2026-03-29 00:56:50.919774 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.919778 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.919782 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.919786 | orchestrator | 2026-03-29 00:56:50.919790 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-29 00:56:50.919794 | orchestrator | Sunday 29 March 2026 00:51:29 +0000 (0:00:06.654) 0:04:49.060 ********** 2026-03-29 00:56:50.919799 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.919803 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.919807 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.919811 | orchestrator | 2026-03-29 00:56:50.919815 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-29 00:56:50.919819 | orchestrator | Sunday 29 March 2026 00:51:29 +0000 (0:00:00.261) 0:04:49.321 ********** 2026-03-29 00:56:50.919825 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e54b0084bec83c244ddd77b3836d245154d04d6d'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-29 00:56:50.919831 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e54b0084bec83c244ddd77b3836d245154d04d6d'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-29 00:56:50.919837 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e54b0084bec83c244ddd77b3836d245154d04d6d'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-29 00:56:50.919847 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e54b0084bec83c244ddd77b3836d245154d04d6d'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-29 00:56:50.919872 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e54b0084bec83c244ddd77b3836d245154d04d6d'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-29 00:56:50.919879 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e54b0084bec83c244ddd77b3836d245154d04d6d'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__e54b0084bec83c244ddd77b3836d245154d04d6d'}])  2026-03-29 00:56:50.919894 | orchestrator | 2026-03-29 00:56:50.919898 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-03-29 00:56:50.919902 | orchestrator | Sunday 29 March 2026 00:51:39 +0000 (0:00:10.191) 0:04:59.512 ********** 2026-03-29 00:56:50.919907 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.919911 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.919915 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.919919 | orchestrator | 2026-03-29 00:56:50.919923 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-29 00:56:50.919927 | orchestrator | Sunday 29 March 2026 00:51:39 +0000 (0:00:00.289) 0:04:59.802 ********** 2026-03-29 00:56:50.919931 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:50.919935 | orchestrator | 2026-03-29 00:56:50.919939 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-29 00:56:50.919943 | orchestrator | Sunday 29 March 2026 00:51:40 +0000 (0:00:00.516) 0:05:00.318 ********** 2026-03-29 00:56:50.919947 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.919952 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.919956 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.919960 | orchestrator | 2026-03-29 00:56:50.919964 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-29 00:56:50.919968 | orchestrator | Sunday 29 March 2026 00:51:40 +0000 (0:00:00.475) 0:05:00.794 ********** 2026-03-29 00:56:50.919972 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.919976 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.919980 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.919984 | orchestrator | 2026-03-29 00:56:50.919988 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-29 
00:56:50.919992 | orchestrator | Sunday 29 March 2026 00:51:41 +0000 (0:00:00.281) 0:05:01.075 ********** 2026-03-29 00:56:50.919996 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-29 00:56:50.920001 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-29 00:56:50.920005 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-29 00:56:50.920009 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.920013 | orchestrator | 2026-03-29 00:56:50.920017 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-29 00:56:50.920021 | orchestrator | Sunday 29 March 2026 00:51:41 +0000 (0:00:00.529) 0:05:01.604 ********** 2026-03-29 00:56:50.920029 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.920035 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.920042 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.920048 | orchestrator | 2026-03-29 00:56:50.920054 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-03-29 00:56:50.920060 | orchestrator | 2026-03-29 00:56:50.920067 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-29 00:56:50.920075 | orchestrator | Sunday 29 March 2026 00:51:42 +0000 (0:00:00.644) 0:05:02.249 ********** 2026-03-29 00:56:50.920082 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:50.920091 | orchestrator | 2026-03-29 00:56:50.920095 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-29 00:56:50.920099 | orchestrator | Sunday 29 March 2026 00:51:42 +0000 (0:00:00.439) 0:05:02.689 ********** 2026-03-29 00:56:50.920103 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-29 00:56:50.920107 | orchestrator | 2026-03-29 00:56:50.920111 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-29 00:56:50.920115 | orchestrator | Sunday 29 March 2026 00:51:43 +0000 (0:00:00.453) 0:05:03.143 ********** 2026-03-29 00:56:50.920119 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.920123 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.920163 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.920167 | orchestrator | 2026-03-29 00:56:50.920171 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-29 00:56:50.920175 | orchestrator | Sunday 29 March 2026 00:51:44 +0000 (0:00:00.959) 0:05:04.103 ********** 2026-03-29 00:56:50.920179 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.920183 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.920188 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.920192 | orchestrator | 2026-03-29 00:56:50.920196 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-29 00:56:50.920204 | orchestrator | Sunday 29 March 2026 00:51:44 +0000 (0:00:00.325) 0:05:04.428 ********** 2026-03-29 00:56:50.920208 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.920212 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.920216 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.920220 | orchestrator | 2026-03-29 00:56:50.920244 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-29 00:56:50.920249 | orchestrator | Sunday 29 March 2026 00:51:44 +0000 (0:00:00.273) 0:05:04.702 ********** 2026-03-29 00:56:50.920253 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.920257 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.920261 | orchestrator | skipping: 
[testbed-node-2] 2026-03-29 00:56:50.920265 | orchestrator | 2026-03-29 00:56:50.920269 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-29 00:56:50.920274 | orchestrator | Sunday 29 March 2026 00:51:45 +0000 (0:00:00.274) 0:05:04.976 ********** 2026-03-29 00:56:50.920278 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.920282 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.920286 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.920290 | orchestrator | 2026-03-29 00:56:50.920294 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-29 00:56:50.920298 | orchestrator | Sunday 29 March 2026 00:51:45 +0000 (0:00:00.873) 0:05:05.850 ********** 2026-03-29 00:56:50.920303 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.920307 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.920311 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.920315 | orchestrator | 2026-03-29 00:56:50.920319 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-29 00:56:50.920323 | orchestrator | Sunday 29 March 2026 00:51:46 +0000 (0:00:00.282) 0:05:06.133 ********** 2026-03-29 00:56:50.920327 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.920332 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.920336 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.920342 | orchestrator | 2026-03-29 00:56:50.920350 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-29 00:56:50.920356 | orchestrator | Sunday 29 March 2026 00:51:46 +0000 (0:00:00.257) 0:05:06.390 ********** 2026-03-29 00:56:50.920363 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.920369 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.920375 | orchestrator | ok: [testbed-node-2] 2026-03-29 
00:56:50.920382 | orchestrator | 2026-03-29 00:56:50.920388 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-29 00:56:50.920394 | orchestrator | Sunday 29 March 2026 00:51:47 +0000 (0:00:00.744) 0:05:07.134 ********** 2026-03-29 00:56:50.920400 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.920406 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.920412 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.920434 | orchestrator | 2026-03-29 00:56:50.920439 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-29 00:56:50.920443 | orchestrator | Sunday 29 March 2026 00:51:48 +0000 (0:00:01.110) 0:05:08.244 ********** 2026-03-29 00:56:50.920447 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.920452 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.920456 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.920460 | orchestrator | 2026-03-29 00:56:50.920469 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-29 00:56:50.920473 | orchestrator | Sunday 29 March 2026 00:51:48 +0000 (0:00:00.390) 0:05:08.635 ********** 2026-03-29 00:56:50.920478 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.920485 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.920491 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.920497 | orchestrator | 2026-03-29 00:56:50.920504 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-29 00:56:50.920510 | orchestrator | Sunday 29 March 2026 00:51:49 +0000 (0:00:00.374) 0:05:09.011 ********** 2026-03-29 00:56:50.920517 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.920524 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.920532 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.920536 | orchestrator | 
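The "ceph-mon : Set cluster configs" task earlier in this play looped over per-key items. Translated out of Ansible, those items amount to `ceph config set` calls of roughly this shape (a sketch with values taken from the log; the exact invocation inside the mon container is an assumption, so the commands are printed rather than executed):

```shell
#!/bin/sh
# Sketch: the config keys applied by "ceph-mon : Set cluster configs",
# rendered as plain `ceph config set` commands (values from the log above).
cmds=""
for kv in \
  "public_network 192.168.16.0/20" \
  "cluster_network 192.168.16.0/20" \
  "osd_pool_default_crush_rule -1" \
  "ms_bind_ipv6 False" \
  "ms_bind_ipv4 True"
do
  cmds="$cmds
ceph config set global $kv"
done
# Print instead of executing: running these requires a live cluster.
printf '%s\n' "$cmds"
```

The sixth item, `osd_crush_chooseleaf_type`, was skipped in the log because its value resolved to an omit placeholder (`__omit_place_holder__...`).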
2026-03-29 00:56:50.920540 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-29 00:56:50.920544 | orchestrator | Sunday 29 March 2026 00:51:49 +0000 (0:00:00.328) 0:05:09.339 ********** 2026-03-29 00:56:50.920548 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.920552 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.920556 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.920560 | orchestrator | 2026-03-29 00:56:50.920564 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-29 00:56:50.920568 | orchestrator | Sunday 29 March 2026 00:51:49 +0000 (0:00:00.518) 0:05:09.858 ********** 2026-03-29 00:56:50.920573 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.920577 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.920626 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.920631 | orchestrator | 2026-03-29 00:56:50.920635 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-29 00:56:50.920639 | orchestrator | Sunday 29 March 2026 00:51:50 +0000 (0:00:00.366) 0:05:10.224 ********** 2026-03-29 00:56:50.920643 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.920647 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.920651 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.920655 | orchestrator | 2026-03-29 00:56:50.920659 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-29 00:56:50.920663 | orchestrator | Sunday 29 March 2026 00:51:50 +0000 (0:00:00.246) 0:05:10.470 ********** 2026-03-29 00:56:50.920667 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.920673 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.920679 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.920685 | orchestrator | 
2026-03-29 00:56:50.920694 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-29 00:56:50.920704 | orchestrator | Sunday 29 March 2026 00:51:50 +0000 (0:00:00.269) 0:05:10.739 ********** 2026-03-29 00:56:50.920710 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.920716 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.920722 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.920728 | orchestrator | 2026-03-29 00:56:50.920735 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-29 00:56:50.920746 | orchestrator | Sunday 29 March 2026 00:51:51 +0000 (0:00:00.310) 0:05:11.050 ********** 2026-03-29 00:56:50.920753 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.920759 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.920766 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.920773 | orchestrator | 2026-03-29 00:56:50.920780 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-29 00:56:50.920817 | orchestrator | Sunday 29 March 2026 00:51:51 +0000 (0:00:00.569) 0:05:11.620 ********** 2026-03-29 00:56:50.920825 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.920832 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.920838 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.920845 | orchestrator | 2026-03-29 00:56:50.920851 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-29 00:56:50.920863 | orchestrator | Sunday 29 March 2026 00:51:52 +0000 (0:00:00.522) 0:05:12.142 ********** 2026-03-29 00:56:50.920867 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 00:56:50.920872 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 00:56:50.920876 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-03-29 00:56:50.920880 | orchestrator | 2026-03-29 00:56:50.920884 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-29 00:56:50.920888 | orchestrator | Sunday 29 March 2026 00:51:52 +0000 (0:00:00.717) 0:05:12.860 ********** 2026-03-29 00:56:50.920893 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:50.920897 | orchestrator | 2026-03-29 00:56:50.920901 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-29 00:56:50.920905 | orchestrator | Sunday 29 March 2026 00:51:53 +0000 (0:00:00.714) 0:05:13.574 ********** 2026-03-29 00:56:50.920909 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.920914 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.920918 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.920922 | orchestrator | 2026-03-29 00:56:50.920926 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-29 00:56:50.920930 | orchestrator | Sunday 29 March 2026 00:51:54 +0000 (0:00:00.719) 0:05:14.293 ********** 2026-03-29 00:56:50.920934 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.920938 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.920942 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.920946 | orchestrator | 2026-03-29 00:56:50.920950 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-29 00:56:50.920954 | orchestrator | Sunday 29 March 2026 00:51:54 +0000 (0:00:00.315) 0:05:14.609 ********** 2026-03-29 00:56:50.920958 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-29 00:56:50.920963 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-29 00:56:50.920967 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-03-29 00:56:50.920971 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-29 00:56:50.920975 | orchestrator | 2026-03-29 00:56:50.920979 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-29 00:56:50.920983 | orchestrator | Sunday 29 March 2026 00:52:03 +0000 (0:00:08.296) 0:05:22.906 ********** 2026-03-29 00:56:50.920987 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.920992 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.920996 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.921000 | orchestrator | 2026-03-29 00:56:50.921004 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-29 00:56:50.921008 | orchestrator | Sunday 29 March 2026 00:52:03 +0000 (0:00:00.479) 0:05:23.386 ********** 2026-03-29 00:56:50.921012 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-29 00:56:50.921016 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-29 00:56:50.921020 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-29 00:56:50.921025 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-29 00:56:50.921029 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:56:50.921033 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:56:50.921037 | orchestrator | 2026-03-29 00:56:50.921043 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-29 00:56:50.921049 | orchestrator | Sunday 29 March 2026 00:52:05 +0000 (0:00:01.680) 0:05:25.067 ********** 2026-03-29 00:56:50.921055 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-29 00:56:50.921060 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-29 00:56:50.921066 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-29 
00:56:50.921071 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-29 00:56:50.921082 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-29 00:56:50.921088 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-29 00:56:50.921095 | orchestrator | 2026-03-29 00:56:50.921101 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-29 00:56:50.921108 | orchestrator | Sunday 29 March 2026 00:52:06 +0000 (0:00:01.506) 0:05:26.573 ********** 2026-03-29 00:56:50.921113 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.921117 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.921120 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.921126 | orchestrator | 2026-03-29 00:56:50.921132 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-29 00:56:50.921138 | orchestrator | Sunday 29 March 2026 00:52:07 +0000 (0:00:00.723) 0:05:27.297 ********** 2026-03-29 00:56:50.921143 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.921150 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.921156 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.921162 | orchestrator | 2026-03-29 00:56:50.921168 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-29 00:56:50.921175 | orchestrator | Sunday 29 March 2026 00:52:07 +0000 (0:00:00.593) 0:05:27.890 ********** 2026-03-29 00:56:50.921179 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.921182 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.921186 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.921190 | orchestrator | 2026-03-29 00:56:50.921201 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-29 00:56:50.921205 | orchestrator | Sunday 29 March 2026 00:52:08 +0000 (0:00:00.277) 0:05:28.168 
********** 2026-03-29 00:56:50.921229 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:50.921233 | orchestrator | 2026-03-29 00:56:50.921237 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-29 00:56:50.921241 | orchestrator | Sunday 29 March 2026 00:52:08 +0000 (0:00:00.545) 0:05:28.713 ********** 2026-03-29 00:56:50.921245 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.921248 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.921252 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.921256 | orchestrator | 2026-03-29 00:56:50.921260 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-29 00:56:50.921263 | orchestrator | Sunday 29 March 2026 00:52:09 +0000 (0:00:00.349) 0:05:29.063 ********** 2026-03-29 00:56:50.921267 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.921271 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.921275 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.921278 | orchestrator | 2026-03-29 00:56:50.921282 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-29 00:56:50.921286 | orchestrator | Sunday 29 March 2026 00:52:09 +0000 (0:00:00.625) 0:05:29.688 ********** 2026-03-29 00:56:50.921289 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:50.921293 | orchestrator | 2026-03-29 00:56:50.921297 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-29 00:56:50.921301 | orchestrator | Sunday 29 March 2026 00:52:10 +0000 (0:00:00.550) 0:05:30.239 ********** 2026-03-29 00:56:50.921305 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.921308 | orchestrator | changed: 
[testbed-node-1] 2026-03-29 00:56:50.921312 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.921316 | orchestrator | 2026-03-29 00:56:50.921319 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-29 00:56:50.921323 | orchestrator | Sunday 29 March 2026 00:52:11 +0000 (0:00:01.332) 0:05:31.572 ********** 2026-03-29 00:56:50.921327 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.921331 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.921334 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.921342 | orchestrator | 2026-03-29 00:56:50.921346 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-29 00:56:50.921350 | orchestrator | Sunday 29 March 2026 00:52:13 +0000 (0:00:01.523) 0:05:33.095 ********** 2026-03-29 00:56:50.921353 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.921357 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.921361 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.921365 | orchestrator | 2026-03-29 00:56:50.921368 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-29 00:56:50.921372 | orchestrator | Sunday 29 March 2026 00:52:15 +0000 (0:00:01.927) 0:05:35.023 ********** 2026-03-29 00:56:50.921376 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.921380 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.921383 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.921388 | orchestrator | 2026-03-29 00:56:50.921394 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-29 00:56:50.921400 | orchestrator | Sunday 29 March 2026 00:52:17 +0000 (0:00:02.039) 0:05:37.063 ********** 2026-03-29 00:56:50.921405 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.921411 | orchestrator | skipping: 
[testbed-node-1]
2026-03-29 00:56:50.921417 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-03-29 00:56:50.921423 | orchestrator |
2026-03-29 00:56:50.921430 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-03-29 00:56:50.921434 | orchestrator | Sunday 29 March 2026 00:52:17 +0000 (0:00:00.416) 0:05:37.479 **********
2026-03-29 00:56:50.921438 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-03-29 00:56:50.921442 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-03-29 00:56:50.921446 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-29 00:56:50.921452 | orchestrator |
2026-03-29 00:56:50.921458 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-29 00:56:50.921463 | orchestrator | Sunday 29 March 2026 00:52:31 +0000 (0:00:13.641) 0:05:51.121 **********
2026-03-29 00:56:50.921472 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-29 00:56:50.921480 | orchestrator |
2026-03-29 00:56:50.921485 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-29 00:56:50.921491 | orchestrator | Sunday 29 March 2026 00:52:32 +0000 (0:00:01.300) 0:05:52.421 **********
2026-03-29 00:56:50.921496 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.921502 | orchestrator |
2026-03-29 00:56:50.921507 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-29 00:56:50.921513 | orchestrator | Sunday 29 March 2026 00:52:32 +0000 (0:00:00.292) 0:05:52.714 **********
2026-03-29 00:56:50.921519 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.921525 | orchestrator |
2026-03-29 00:56:50.921530 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-29 00:56:50.921536 | orchestrator | Sunday 29 March 2026 00:52:32 +0000 (0:00:00.118) 0:05:52.832 **********
2026-03-29 00:56:50.921543 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-03-29 00:56:50.921549 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-03-29 00:56:50.921558 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-03-29 00:56:50.921566 | orchestrator |
2026-03-29 00:56:50.921577 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-29 00:56:50.921600 | orchestrator | Sunday 29 March 2026 00:52:39 +0000 (0:00:06.086) 0:05:58.919 **********
2026-03-29 00:56:50.921606 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-03-29 00:56:50.921639 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-03-29 00:56:50.921647 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-03-29 00:56:50.921659 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-03-29 00:56:50.921666 | orchestrator |
2026-03-29 00:56:50.921670 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-29 00:56:50.921674 | orchestrator | Sunday 29 March 2026 00:52:43 +0000 (0:00:04.645) 0:06:03.565 **********
2026-03-29 00:56:50.921678 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:56:50.921682 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:56:50.921686 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:56:50.921689 | orchestrator |
2026-03-29 00:56:50.921693 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-29 00:56:50.921697 | orchestrator | Sunday 29 March 2026 00:52:44 +0000 (0:00:00.861) 0:06:04.427 **********
2026-03-29 00:56:50.921700 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:56:50.921704 | orchestrator |
2026-03-29 00:56:50.921708 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-29 00:56:50.921711 | orchestrator | Sunday 29 March 2026 00:52:44 +0000 (0:00:00.453) 0:06:04.880 **********
2026-03-29 00:56:50.921715 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.921719 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.921723 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.921726 | orchestrator |
2026-03-29 00:56:50.921730 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-29 00:56:50.921734 | orchestrator | Sunday 29 March 2026 00:52:45 +0000 (0:00:00.269) 0:06:05.149 **********
2026-03-29 00:56:50.921737 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:56:50.921741 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:56:50.921745 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:56:50.921749 | orchestrator |
2026-03-29 00:56:50.921752 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-29 00:56:50.921756 | orchestrator | Sunday 29 March 2026 00:52:46 +0000 (0:00:01.222) 0:06:06.371 **********
2026-03-29 00:56:50.921760 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 00:56:50.921764 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-29 00:56:50.921767 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-29 00:56:50.921771 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:50.921775 | orchestrator |
2026-03-29 00:56:50.921779 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-29 00:56:50.921782 | orchestrator | Sunday 29 March 2026 00:52:47 +0000 (0:00:00.545) 0:06:06.917 **********
2026-03-29 00:56:50.921786 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:50.921790 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:50.921793 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:50.921797 | orchestrator |
2026-03-29 00:56:50.921801 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-03-29 00:56:50.921805 | orchestrator |
2026-03-29 00:56:50.921808 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-29 00:56:50.921812 | orchestrator | Sunday 29 March 2026 00:52:47 +0000 (0:00:00.531) 0:06:07.448 **********
2026-03-29 00:56:50.921816 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:56:50.921820 | orchestrator |
2026-03-29 00:56:50.921824 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-29 00:56:50.921827 | orchestrator | Sunday 29 March 2026 00:52:48 +0000 (0:00:00.773) 0:06:08.222 **********
2026-03-29 00:56:50.921831 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:56:50.921835 | orchestrator |
2026-03-29 00:56:50.921838 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-29 00:56:50.921842 | orchestrator | Sunday 29 March 2026 00:52:48 +0000 (0:00:00.543) 0:06:08.765 **********
2026-03-29 00:56:50.921849 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.921853 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.921857 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.921861 | orchestrator |
2026-03-29 00:56:50.921864 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-29 00:56:50.921868 | orchestrator | Sunday 29 March 2026 00:52:49 +0000 (0:00:00.373) 0:06:09.138 **********
2026-03-29 00:56:50.921872 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.921875 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.921879 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.921883 | orchestrator |
2026-03-29 00:56:50.921887 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-29 00:56:50.921890 | orchestrator | Sunday 29 March 2026 00:52:50 +0000 (0:00:01.071) 0:06:10.210 **********
2026-03-29 00:56:50.921894 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.921898 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.921901 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.921905 | orchestrator |
2026-03-29 00:56:50.921909 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-29 00:56:50.921912 | orchestrator | Sunday 29 March 2026 00:52:51 +0000 (0:00:00.732) 0:06:10.942 **********
2026-03-29 00:56:50.921916 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.921920 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.921924 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.921927 | orchestrator |
2026-03-29 00:56:50.921931 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-29 00:56:50.921935 | orchestrator | Sunday 29 March 2026 00:52:51 +0000 (0:00:00.760) 0:06:11.702 **********
2026-03-29 00:56:50.921938 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.921942 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.921949 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.921953 | orchestrator |
2026-03-29 00:56:50.921956 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-29 00:56:50.921960 | orchestrator | Sunday 29 March 2026 00:52:52 +0000 (0:00:00.314) 0:06:12.018 **********
2026-03-29 00:56:50.921977 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.921982 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.921985 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.921989 | orchestrator |
2026-03-29 00:56:50.921993 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-29 00:56:50.921997 | orchestrator | Sunday 29 March 2026 00:52:52 +0000 (0:00:00.757) 0:06:12.775 **********
2026-03-29 00:56:50.922000 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.922004 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.922008 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.922042 | orchestrator |
2026-03-29 00:56:50.922047 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-29 00:56:50.922051 | orchestrator | Sunday 29 March 2026 00:52:53 +0000 (0:00:00.323) 0:06:13.099 **********
2026-03-29 00:56:50.922054 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.922058 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.922062 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.922066 | orchestrator |
2026-03-29 00:56:50.922069 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-29 00:56:50.922073 | orchestrator | Sunday 29 March 2026 00:52:54 +0000 (0:00:00.843) 0:06:13.942 **********
2026-03-29 00:56:50.922077 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.922081 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.922084 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.922088 | orchestrator |
2026-03-29 00:56:50.922092 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-29 00:56:50.922095 | orchestrator | Sunday 29 March 2026 00:52:54 +0000 (0:00:00.864) 0:06:14.807 **********
2026-03-29 00:56:50.922099 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.922106 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.922110 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.922114 | orchestrator |
2026-03-29 00:56:50.922118 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-29 00:56:50.922122 | orchestrator | Sunday 29 March 2026 00:52:55 +0000 (0:00:00.783) 0:06:15.590 **********
2026-03-29 00:56:50.922125 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.922129 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.922133 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.922137 | orchestrator |
2026-03-29 00:56:50.922140 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-29 00:56:50.922144 | orchestrator | Sunday 29 March 2026 00:52:56 +0000 (0:00:00.313) 0:06:15.904 **********
2026-03-29 00:56:50.922148 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.922152 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.922155 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.922159 | orchestrator |
2026-03-29 00:56:50.922163 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-29 00:56:50.922166 | orchestrator | Sunday 29 March 2026 00:52:56 +0000 (0:00:00.326) 0:06:16.230 **********
2026-03-29 00:56:50.922170 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.922174 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.922178 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.922181 | orchestrator |
2026-03-29 00:56:50.922185 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-29 00:56:50.922189 | orchestrator | Sunday 29 March 2026 00:52:56 +0000 (0:00:00.412) 0:06:16.642 **********
2026-03-29 00:56:50.922193 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.922196 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.922200 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.922204 | orchestrator |
2026-03-29 00:56:50.922208 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-29 00:56:50.922211 | orchestrator | Sunday 29 March 2026 00:52:57 +0000 (0:00:00.727) 0:06:17.369 **********
2026-03-29 00:56:50.922215 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.922219 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.922223 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.922226 | orchestrator |
2026-03-29 00:56:50.922230 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-29 00:56:50.922234 | orchestrator | Sunday 29 March 2026 00:52:57 +0000 (0:00:00.403) 0:06:17.773 **********
2026-03-29 00:56:50.922238 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.922241 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.922245 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.922249 | orchestrator |
2026-03-29 00:56:50.922253 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-29 00:56:50.922256 | orchestrator | Sunday 29 March 2026 00:52:58 +0000 (0:00:00.417) 0:06:18.190 **********
2026-03-29 00:56:50.922260 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.922264 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.922268 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.922271 | orchestrator |
2026-03-29 00:56:50.922275 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-29 00:56:50.922279 | orchestrator | Sunday 29 March 2026 00:52:58 +0000 (0:00:00.334) 0:06:18.525 **********
2026-03-29 00:56:50.922282 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.922286 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.922290 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.922294 | orchestrator |
2026-03-29 00:56:50.922297 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-29 00:56:50.922301 | orchestrator | Sunday 29 March 2026 00:52:59 +0000 (0:00:00.690) 0:06:19.215 **********
2026-03-29 00:56:50.922305 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.922309 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.922312 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.922319 | orchestrator |
2026-03-29 00:56:50.922323 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-03-29 00:56:50.922327 | orchestrator | Sunday 29 March 2026 00:52:59 +0000 (0:00:00.548) 0:06:19.763 **********
2026-03-29 00:56:50.922331 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.922334 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.922338 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.922342 | orchestrator |
2026-03-29 00:56:50.922349 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-03-29 00:56:50.922353 | orchestrator | Sunday 29 March 2026 00:53:00 +0000 (0:00:00.390) 0:06:20.154 **********
2026-03-29 00:56:50.922357 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-29 00:56:50.922364 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-29 00:56:50.922368 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-29 00:56:50.922372 | orchestrator |
2026-03-29 00:56:50.922376 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-03-29 00:56:50.922379 | orchestrator | Sunday 29 March 2026 00:53:01 +0000 (0:00:00.945) 0:06:21.099 **********
2026-03-29 00:56:50.922383 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:56:50.922387 | orchestrator |
2026-03-29 00:56:50.922391 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-03-29 00:56:50.922394 | orchestrator | Sunday 29 March 2026 00:53:01 +0000 (0:00:00.779) 0:06:21.879 **********
2026-03-29 00:56:50.922398 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.922402 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.922406 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.922409 | orchestrator |
2026-03-29 00:56:50.922413 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-03-29 00:56:50.922417 | orchestrator | Sunday 29 March 2026 00:53:02 +0000 (0:00:00.311) 0:06:22.190 **********
2026-03-29 00:56:50.922421 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.922424 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.922428 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.922432 | orchestrator |
2026-03-29 00:56:50.922436 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-03-29 00:56:50.922439 | orchestrator | Sunday 29 March 2026 00:53:02 +0000 (0:00:00.326) 0:06:22.517 **********
2026-03-29 00:56:50.922443 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.922447 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.922450 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.922454 | orchestrator |
2026-03-29 00:56:50.922458 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-03-29 00:56:50.922462 | orchestrator | Sunday 29 March 2026 00:53:03 +0000 (0:00:00.932) 0:06:23.449 **********
2026-03-29 00:56:50.922465 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.922469 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.922473 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.922476 | orchestrator |
2026-03-29 00:56:50.922480 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-03-29 00:56:50.922484 | orchestrator | Sunday 29 March 2026 00:53:03 +0000 (0:00:00.342) 0:06:23.792 **********
2026-03-29 00:56:50.922487 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-29 00:56:50.922491 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-29 00:56:50.922495 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-29 00:56:50.922499 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-29 00:56:50.922502 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-29 00:56:50.922510 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-29 00:56:50.922514 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-29 00:56:50.922517 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-29 00:56:50.922521 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-29 00:56:50.922525 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-29 00:56:50.922529 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-29 00:56:50.922532 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-29 00:56:50.922536 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-29 00:56:50.922540 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-29 00:56:50.922543 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-29 00:56:50.922547 | orchestrator |
2026-03-29 00:56:50.922551 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-03-29 00:56:50.922554 | orchestrator | Sunday 29 March 2026 00:53:08 +0000 (0:00:04.237) 0:06:28.029 **********
2026-03-29 00:56:50.922558 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.922562 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.922566 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.922569 | orchestrator |
2026-03-29 00:56:50.922573 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-03-29 00:56:50.922577 | orchestrator | Sunday 29 March 2026 00:53:08 +0000 (0:00:00.318) 0:06:28.347 **********
2026-03-29 00:56:50.922598 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:56:50.922602 | orchestrator |
2026-03-29 00:56:50.922605 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-03-29 00:56:50.922609 | orchestrator | Sunday 29 March 2026 00:53:09 +0000 (0:00:00.757) 0:06:29.105 **********
2026-03-29 00:56:50.922615 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-29 00:56:50.922619 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-29 00:56:50.922623 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-29 00:56:50.922633 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-03-29 00:56:50.922637 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-03-29 00:56:50.922640 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-03-29 00:56:50.922644 | orchestrator |
2026-03-29 00:56:50.922648 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-03-29 00:56:50.922652 | orchestrator | Sunday 29 March 2026 00:53:10 +0000 (0:00:01.068) 0:06:30.174 **********
2026-03-29 00:56:50.922655 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-29 00:56:50.922659 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-29 00:56:50.922663 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-29 00:56:50.922666 | orchestrator |
2026-03-29 00:56:50.922670 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-03-29 00:56:50.922674 | orchestrator | Sunday 29 March 2026 00:53:12 +0000 (0:00:01.869) 0:06:32.044 **********
2026-03-29 00:56:50.922678 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-29 00:56:50.922681 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-29 00:56:50.922686 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:56:50.922693 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-29 00:56:50.922699 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-29 00:56:50.922705 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:56:50.922715 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-29 00:56:50.922721 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-29 00:56:50.922727 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:56:50.922733 | orchestrator |
2026-03-29 00:56:50.922738 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-03-29 00:56:50.922744 | orchestrator | Sunday 29 March 2026 00:53:13 +0000 (0:00:01.171) 0:06:33.215 **********
2026-03-29 00:56:50.922751 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-29 00:56:50.922757 | orchestrator |
2026-03-29 00:56:50.922763 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-03-29 00:56:50.922770 | orchestrator | Sunday 29 March 2026 00:53:15 +0000 (0:00:02.533) 0:06:35.749 **********
2026-03-29 00:56:50.922774 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:56:50.922778 | orchestrator |
2026-03-29 00:56:50.922782 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-03-29 00:56:50.922785 | orchestrator | Sunday 29 March 2026 00:53:16 +0000 (0:00:00.577) 0:06:36.326 **********
2026-03-29 00:56:50.922789 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-cb4f0063-6caa-55a9-9ed6-73f648958ae5', 'data_vg': 'ceph-cb4f0063-6caa-55a9-9ed6-73f648958ae5'})
2026-03-29 00:56:50.922795 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-185c2dd0-6b1c-571f-b734-244d928106eb', 'data_vg': 'ceph-185c2dd0-6b1c-571f-b734-244d928106eb'})
2026-03-29 00:56:50.922799 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ce40293b-1bc0-5558-a1b7-16c9a624d7c9', 'data_vg': 'ceph-ce40293b-1bc0-5558-a1b7-16c9a624d7c9'})
2026-03-29 00:56:50.922802 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9db53e8f-4e16-545c-9934-db4b909c3b32', 'data_vg': 'ceph-9db53e8f-4e16-545c-9934-db4b909c3b32'})
2026-03-29 00:56:50.922806 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-18721a71-2d87-5ab0-bec8-5e03a015e695', 'data_vg': 'ceph-18721a71-2d87-5ab0-bec8-5e03a015e695'})
2026-03-29 00:56:50.922810 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c9903f66-e17d-5d19-b140-42471f0a3aa8', 'data_vg': 'ceph-c9903f66-e17d-5d19-b140-42471f0a3aa8'})
2026-03-29 00:56:50.922814 | orchestrator |
2026-03-29 00:56:50.922817 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-03-29 00:56:50.922821 | orchestrator | Sunday 29 March 2026 00:53:50 +0000 (0:00:33.963) 0:07:10.290 **********
2026-03-29 00:56:50.922825 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.922828 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.922832 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.922836 | orchestrator |
2026-03-29 00:56:50.922839 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-03-29 00:56:50.922843 | orchestrator | Sunday 29 March 2026 00:53:50 +0000 (0:00:00.547) 0:07:10.837 **********
2026-03-29 00:56:50.922847 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:56:50.922851 | orchestrator |
2026-03-29 00:56:50.922854 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-03-29 00:56:50.922858 | orchestrator | Sunday 29 March 2026 00:53:51 +0000 (0:00:00.511) 0:07:11.348 **********
2026-03-29 00:56:50.922862 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.922867 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.922873 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.922879 | orchestrator |
2026-03-29 00:56:50.922886 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-03-29 00:56:50.922894 | orchestrator | Sunday 29 March 2026 00:53:52 +0000 (0:00:00.658) 0:07:12.007 **********
2026-03-29 00:56:50.922900 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:56:50.922906 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:56:50.922912 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:56:50.922924 | orchestrator |
2026-03-29 00:56:50.922933 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-03-29 00:56:50.922940 | orchestrator | Sunday 29 March 2026 00:53:53 +0000 (0:00:01.816) 0:07:13.823 **********
2026-03-29 00:56:50.922945 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:56:50.922951 | orchestrator |
2026-03-29 00:56:50.922962 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-03-29 00:56:50.922968 | orchestrator | Sunday 29 March 2026 00:53:54 +0000 (0:00:00.581) 0:07:14.404 **********
2026-03-29 00:56:50.922974 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:56:50.922979 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:56:50.922985 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:56:50.922991 | orchestrator |
2026-03-29 00:56:50.922996 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-03-29 00:56:50.923001 | orchestrator | Sunday 29 March 2026 00:53:55 +0000 (0:00:01.249) 0:07:15.654 **********
2026-03-29 00:56:50.923007 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:56:50.923013 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:56:50.923019 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:56:50.923025 | orchestrator |
2026-03-29 00:56:50.923031 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-03-29 00:56:50.923037 | orchestrator | Sunday 29 March 2026 00:53:57 +0000 (0:00:01.618) 0:07:17.272 **********
2026-03-29 00:56:50.923044 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:56:50.923050 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:56:50.923056 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:56:50.923062 | orchestrator |
2026-03-29 00:56:50.923067 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-03-29 00:56:50.923073 | orchestrator | Sunday 29 March 2026 00:53:59 +0000 (0:00:01.923) 0:07:19.196 **********
2026-03-29 00:56:50.923079 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.923084 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.923089 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.923094 | orchestrator |
2026-03-29 00:56:50.923100 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-03-29 00:56:50.923105 | orchestrator | Sunday 29 March 2026 00:53:59 +0000 (0:00:00.321) 0:07:19.517 **********
2026-03-29 00:56:50.923110 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.923115 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.923121 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.923126 | orchestrator |
2026-03-29 00:56:50.923131 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-03-29 00:56:50.923136 | orchestrator | Sunday 29 March 2026 00:53:59 +0000 (0:00:00.305) 0:07:19.823 **********
2026-03-29 00:56:50.923142 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-29 00:56:50.923147 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-03-29 00:56:50.923153 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-03-29 00:56:50.923158 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-03-29 00:56:50.923164 | orchestrator | ok: [testbed-node-4] => (item=5)
2026-03-29 00:56:50.923169 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-03-29 00:56:50.923176 | orchestrator |
2026-03-29 00:56:50.923182 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-03-29 00:56:50.923188 | orchestrator | Sunday 29 March 2026 00:54:01 +0000 (0:00:01.412) 0:07:21.235 **********
2026-03-29 00:56:50.923193 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-29 00:56:50.923199 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-29 00:56:50.923205 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-29 00:56:50.923210 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-03-29 00:56:50.923216 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-03-29 00:56:50.923223 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-03-29 00:56:50.923228 | orchestrator |
2026-03-29 00:56:50.923241 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-03-29 00:56:50.923247 | orchestrator | Sunday 29 March 2026 00:54:03 +0000 (0:00:02.099) 0:07:23.334 **********
2026-03-29 00:56:50.923253 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-29 00:56:50.923258 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-29 00:56:50.923267 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-29 00:56:50.923274 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-03-29 00:56:50.923280 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-03-29 00:56:50.923286 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-03-29 00:56:50.923292 | orchestrator |
2026-03-29 00:56:50.923298 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-03-29 00:56:50.923303 | orchestrator | Sunday 29 March 2026 00:54:07 +0000 (0:00:03.679) 0:07:27.014 **********
2026-03-29 00:56:50.923308 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.923313 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.923319 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-29 00:56:50.923324 | orchestrator |
2026-03-29 00:56:50.923329 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-03-29 00:56:50.923335 | orchestrator | Sunday 29 March 2026 00:54:09 +0000 (0:00:02.609) 0:07:29.623 **********
2026-03-29 00:56:50.923340 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.923345 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.923351 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-03-29 00:56:50.923357 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-29 00:56:50.923363 | orchestrator |
2026-03-29 00:56:50.923368 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-03-29 00:56:50.923374 | orchestrator | Sunday 29 March 2026 00:54:23 +0000 (0:00:13.545) 0:07:43.169 **********
2026-03-29 00:56:50.923379 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.923384 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.923390 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.923396 | orchestrator |
2026-03-29 00:56:50.923401 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-29 00:56:50.923407 | orchestrator | Sunday 29 March 2026 00:54:24 +0000 (0:00:00.912) 0:07:44.081 **********
2026-03-29 00:56:50.923424 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:56:50.923431 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:56:50.923437 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:56:50.923443 | orchestrator |
2026-03-29 00:56:50.923450 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-29 00:56:50.923461 | orchestrator | Sunday 29 March 2026 00:54:24 +0000 (0:00:00.634) 0:07:44.716 **********
2026-03-29 00:56:50.923465 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:56:50.923469 | orchestrator | 2026-03-29 00:56:50.923473 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-29 00:56:50.923477 | orchestrator | Sunday 29 March 2026 00:54:25 +0000 (0:00:00.582) 0:07:45.299 ********** 2026-03-29 00:56:50.923480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:56:50.923484 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:56:50.923488 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:56:50.923493 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.923499 | orchestrator | 2026-03-29 00:56:50.923505 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-29 00:56:50.923514 | orchestrator | Sunday 29 March 2026 00:54:25 +0000 (0:00:00.415) 0:07:45.714 ********** 2026-03-29 00:56:50.923522 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.923529 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.923540 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.923546 | orchestrator | 2026-03-29 00:56:50.923552 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-29 00:56:50.923558 | orchestrator | Sunday 29 March 2026 00:54:26 +0000 (0:00:00.320) 0:07:46.035 ********** 2026-03-29 00:56:50.923564 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.923569 | orchestrator | 2026-03-29 00:56:50.923574 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-29 00:56:50.923632 | orchestrator | Sunday 29 March 2026 00:54:26 +0000 (0:00:00.237) 0:07:46.272 ********** 2026-03-29 00:56:50.923639 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.923645 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.923651 | orchestrator | skipping: 
[testbed-node-5] 2026-03-29 00:56:50.923657 | orchestrator | 2026-03-29 00:56:50.923663 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-29 00:56:50.923669 | orchestrator | Sunday 29 March 2026 00:54:27 +0000 (0:00:00.646) 0:07:46.919 ********** 2026-03-29 00:56:50.923675 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.923681 | orchestrator | 2026-03-29 00:56:50.923685 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-29 00:56:50.923689 | orchestrator | Sunday 29 March 2026 00:54:27 +0000 (0:00:00.228) 0:07:47.148 ********** 2026-03-29 00:56:50.923693 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.923696 | orchestrator | 2026-03-29 00:56:50.923700 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-29 00:56:50.923704 | orchestrator | Sunday 29 March 2026 00:54:27 +0000 (0:00:00.229) 0:07:47.378 ********** 2026-03-29 00:56:50.923707 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.923711 | orchestrator | 2026-03-29 00:56:50.923715 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-29 00:56:50.923718 | orchestrator | Sunday 29 March 2026 00:54:27 +0000 (0:00:00.149) 0:07:47.527 ********** 2026-03-29 00:56:50.923723 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.923729 | orchestrator | 2026-03-29 00:56:50.923739 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-29 00:56:50.923746 | orchestrator | Sunday 29 March 2026 00:54:27 +0000 (0:00:00.274) 0:07:47.801 ********** 2026-03-29 00:56:50.923751 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.923757 | orchestrator | 2026-03-29 00:56:50.923762 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-29 00:56:50.923768 | 
orchestrator | Sunday 29 March 2026 00:54:28 +0000 (0:00:00.237) 0:07:48.038 ********** 2026-03-29 00:56:50.923774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:56:50.923779 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:56:50.923785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:56:50.923791 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.923796 | orchestrator | 2026-03-29 00:56:50.923802 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-29 00:56:50.923808 | orchestrator | Sunday 29 March 2026 00:54:28 +0000 (0:00:00.370) 0:07:48.409 ********** 2026-03-29 00:56:50.923813 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.923818 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.923825 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.923831 | orchestrator | 2026-03-29 00:56:50.923836 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-29 00:56:50.923842 | orchestrator | Sunday 29 March 2026 00:54:28 +0000 (0:00:00.310) 0:07:48.719 ********** 2026-03-29 00:56:50.923848 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.923854 | orchestrator | 2026-03-29 00:56:50.923862 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-29 00:56:50.923866 | orchestrator | Sunday 29 March 2026 00:54:29 +0000 (0:00:00.688) 0:07:49.408 ********** 2026-03-29 00:56:50.923870 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.923881 | orchestrator | 2026-03-29 00:56:50.923885 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-03-29 00:56:50.923889 | orchestrator | 2026-03-29 00:56:50.923892 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-03-29 00:56:50.923896 | orchestrator | Sunday 29 March 2026 00:54:30 +0000 (0:00:00.592) 0:07:50.001 ********** 2026-03-29 00:56:50.923901 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:50.923907 | orchestrator | 2026-03-29 00:56:50.923915 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-29 00:56:50.923919 | orchestrator | Sunday 29 March 2026 00:54:31 +0000 (0:00:01.130) 0:07:51.131 ********** 2026-03-29 00:56:50.923929 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:50.923933 | orchestrator | 2026-03-29 00:56:50.923937 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-29 00:56:50.923941 | orchestrator | Sunday 29 March 2026 00:54:32 +0000 (0:00:01.085) 0:07:52.217 ********** 2026-03-29 00:56:50.923944 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.923948 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.923952 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.923956 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.923959 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.923963 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.923967 | orchestrator | 2026-03-29 00:56:50.923970 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-29 00:56:50.923974 | orchestrator | Sunday 29 March 2026 00:54:33 +0000 (0:00:01.182) 0:07:53.399 ********** 2026-03-29 00:56:50.923978 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.923982 | orchestrator | skipping: [testbed-node-1] 2026-03-29 
00:56:50.923985 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.923989 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.923992 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.923996 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.924000 | orchestrator | 2026-03-29 00:56:50.924004 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-29 00:56:50.924007 | orchestrator | Sunday 29 March 2026 00:54:34 +0000 (0:00:00.728) 0:07:54.128 ********** 2026-03-29 00:56:50.924011 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.924015 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.924018 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.924022 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.924026 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.924029 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.924033 | orchestrator | 2026-03-29 00:56:50.924037 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-29 00:56:50.924040 | orchestrator | Sunday 29 March 2026 00:54:34 +0000 (0:00:00.710) 0:07:54.838 ********** 2026-03-29 00:56:50.924044 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.924048 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.924052 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.924055 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.924059 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.924063 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.924066 | orchestrator | 2026-03-29 00:56:50.924070 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-29 00:56:50.924074 | orchestrator | Sunday 29 March 2026 00:54:35 +0000 (0:00:00.857) 0:07:55.696 ********** 2026-03-29 00:56:50.924077 | orchestrator | skipping: [testbed-node-3] 
2026-03-29 00:56:50.924081 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.924085 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.924088 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.924095 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.924099 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.924103 | orchestrator | 2026-03-29 00:56:50.924107 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-29 00:56:50.924110 | orchestrator | Sunday 29 March 2026 00:54:36 +0000 (0:00:00.930) 0:07:56.626 ********** 2026-03-29 00:56:50.924114 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.924118 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.924121 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.924125 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.924129 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.924132 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.924136 | orchestrator | 2026-03-29 00:56:50.924140 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-29 00:56:50.924143 | orchestrator | Sunday 29 March 2026 00:54:37 +0000 (0:00:00.737) 0:07:57.364 ********** 2026-03-29 00:56:50.924147 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.924151 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.924154 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.924158 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.924162 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.924165 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.924169 | orchestrator | 2026-03-29 00:56:50.924173 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-29 00:56:50.924176 | orchestrator | Sunday 29 March 2026 
00:54:37 +0000 (0:00:00.530) 0:07:57.894 ********** 2026-03-29 00:56:50.924180 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.924184 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.924187 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.924191 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.924195 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.924201 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.924207 | orchestrator | 2026-03-29 00:56:50.924212 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-29 00:56:50.924219 | orchestrator | Sunday 29 March 2026 00:54:39 +0000 (0:00:01.267) 0:07:59.162 ********** 2026-03-29 00:56:50.924227 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.924235 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.924241 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.924246 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.924252 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.924257 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.924263 | orchestrator | 2026-03-29 00:56:50.924269 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-29 00:56:50.924275 | orchestrator | Sunday 29 March 2026 00:54:40 +0000 (0:00:00.928) 0:08:00.090 ********** 2026-03-29 00:56:50.924281 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.924287 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.924293 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.924299 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.924306 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.924312 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.924318 | orchestrator | 2026-03-29 00:56:50.924327 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 
2026-03-29 00:56:50.924331 | orchestrator | Sunday 29 March 2026 00:54:40 +0000 (0:00:00.693) 0:08:00.783 ********** 2026-03-29 00:56:50.924334 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.924338 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.924346 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.924350 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.924354 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.924357 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.924361 | orchestrator | 2026-03-29 00:56:50.924365 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-29 00:56:50.924373 | orchestrator | Sunday 29 March 2026 00:54:41 +0000 (0:00:00.520) 0:08:01.303 ********** 2026-03-29 00:56:50.924377 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.924381 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.924384 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.924388 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.924392 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.924396 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.924402 | orchestrator | 2026-03-29 00:56:50.924407 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-29 00:56:50.924413 | orchestrator | Sunday 29 March 2026 00:54:42 +0000 (0:00:00.665) 0:08:01.969 ********** 2026-03-29 00:56:50.924419 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.924424 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.924430 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.924435 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.924441 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.924447 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.924453 | orchestrator | 2026-03-29 00:56:50.924458 | orchestrator 
| TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-29 00:56:50.924465 | orchestrator | Sunday 29 March 2026 00:54:42 +0000 (0:00:00.493) 0:08:02.463 ********** 2026-03-29 00:56:50.924470 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.924476 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.924482 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.924488 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.924495 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.924499 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.924503 | orchestrator | 2026-03-29 00:56:50.924507 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-29 00:56:50.924510 | orchestrator | Sunday 29 March 2026 00:54:43 +0000 (0:00:00.689) 0:08:03.153 ********** 2026-03-29 00:56:50.924514 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.924518 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.924522 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.924525 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.924529 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.924533 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.924536 | orchestrator | 2026-03-29 00:56:50.924540 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-29 00:56:50.924544 | orchestrator | Sunday 29 March 2026 00:54:43 +0000 (0:00:00.483) 0:08:03.636 ********** 2026-03-29 00:56:50.924547 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.924551 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.924555 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.924558 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:50.924562 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:50.924566 | 
orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:50.924570 | orchestrator | 2026-03-29 00:56:50.924573 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-29 00:56:50.924594 | orchestrator | Sunday 29 March 2026 00:54:44 +0000 (0:00:00.679) 0:08:04.316 ********** 2026-03-29 00:56:50.924601 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.924607 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.924613 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.924617 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.924621 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.924625 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.924628 | orchestrator | 2026-03-29 00:56:50.924632 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-29 00:56:50.924636 | orchestrator | Sunday 29 March 2026 00:54:44 +0000 (0:00:00.511) 0:08:04.828 ********** 2026-03-29 00:56:50.924640 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.924644 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.924647 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.924655 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.924659 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.924662 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.924666 | orchestrator | 2026-03-29 00:56:50.924670 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-29 00:56:50.924674 | orchestrator | Sunday 29 March 2026 00:54:45 +0000 (0:00:00.727) 0:08:05.555 ********** 2026-03-29 00:56:50.924677 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.924681 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.924685 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.924688 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.924692 | orchestrator 
| ok: [testbed-node-1] 2026-03-29 00:56:50.924696 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.924699 | orchestrator | 2026-03-29 00:56:50.924703 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-29 00:56:50.924707 | orchestrator | Sunday 29 March 2026 00:54:46 +0000 (0:00:01.074) 0:08:06.630 ********** 2026-03-29 00:56:50.924711 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 00:56:50.924714 | orchestrator | 2026-03-29 00:56:50.924718 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-29 00:56:50.924722 | orchestrator | Sunday 29 March 2026 00:54:49 +0000 (0:00:03.255) 0:08:09.886 ********** 2026-03-29 00:56:50.924726 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 00:56:50.924729 | orchestrator | 2026-03-29 00:56:50.924733 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-29 00:56:50.924737 | orchestrator | Sunday 29 March 2026 00:54:51 +0000 (0:00:01.645) 0:08:11.531 ********** 2026-03-29 00:56:50.924741 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:56:50.924744 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.924748 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.924752 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.924759 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.924764 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.924770 | orchestrator | 2026-03-29 00:56:50.924780 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-29 00:56:50.924786 | orchestrator | Sunday 29 March 2026 00:54:53 +0000 (0:00:01.694) 0:08:13.225 ********** 2026-03-29 00:56:50.924797 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.924803 | orchestrator | changed: [testbed-node-4] 2026-03-29 
00:56:50.924808 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.924814 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.924819 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.924825 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.924831 | orchestrator | 2026-03-29 00:56:50.924837 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-03-29 00:56:50.924843 | orchestrator | Sunday 29 March 2026 00:54:54 +0000 (0:00:01.520) 0:08:14.745 ********** 2026-03-29 00:56:50.924849 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:50.924857 | orchestrator | 2026-03-29 00:56:50.924863 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-29 00:56:50.924869 | orchestrator | Sunday 29 March 2026 00:54:55 +0000 (0:00:01.042) 0:08:15.788 ********** 2026-03-29 00:56:50.924875 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.924878 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:56:50.924882 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.924886 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.924891 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.924897 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.924903 | orchestrator | 2026-03-29 00:56:50.924909 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-29 00:56:50.924916 | orchestrator | Sunday 29 March 2026 00:54:57 +0000 (0:00:01.476) 0:08:17.265 ********** 2026-03-29 00:56:50.924929 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.924935 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:56:50.924941 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.924948 | orchestrator | 
changed: [testbed-node-1] 2026-03-29 00:56:50.924953 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.924959 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.924965 | orchestrator | 2026-03-29 00:56:50.924970 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-29 00:56:50.924977 | orchestrator | Sunday 29 March 2026 00:55:00 +0000 (0:00:03.556) 0:08:20.821 ********** 2026-03-29 00:56:50.924983 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:50.924990 | orchestrator | 2026-03-29 00:56:50.924996 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-29 00:56:50.925001 | orchestrator | Sunday 29 March 2026 00:55:02 +0000 (0:00:01.092) 0:08:21.914 ********** 2026-03-29 00:56:50.925005 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.925009 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.925013 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.925016 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.925020 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.925024 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.925028 | orchestrator | 2026-03-29 00:56:50.925031 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-29 00:56:50.925035 | orchestrator | Sunday 29 March 2026 00:55:02 +0000 (0:00:00.620) 0:08:22.534 ********** 2026-03-29 00:56:50.925039 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.925042 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:56:50.925046 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.925050 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:50.925054 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:50.925057 | 
orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:50.925061 | orchestrator | 2026-03-29 00:56:50.925065 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-29 00:56:50.925068 | orchestrator | Sunday 29 March 2026 00:55:04 +0000 (0:00:02.193) 0:08:24.728 ********** 2026-03-29 00:56:50.925072 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.925076 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.925080 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.925083 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:50.925087 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:50.925090 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:50.925094 | orchestrator | 2026-03-29 00:56:50.925098 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-29 00:56:50.925101 | orchestrator | 2026-03-29 00:56:50.925105 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-29 00:56:50.925109 | orchestrator | Sunday 29 March 2026 00:55:05 +0000 (0:00:00.765) 0:08:25.494 ********** 2026-03-29 00:56:50.925113 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:56:50.925116 | orchestrator | 2026-03-29 00:56:50.925120 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-29 00:56:50.925124 | orchestrator | Sunday 29 March 2026 00:55:06 +0000 (0:00:00.642) 0:08:26.136 ********** 2026-03-29 00:56:50.925128 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:56:50.925131 | orchestrator | 2026-03-29 00:56:50.925135 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-29 00:56:50.925139 | 
orchestrator | Sunday 29 March 2026 00:55:06 +0000 (0:00:00.426) 0:08:26.563 ********** 2026-03-29 00:56:50.925142 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.925146 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.925155 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.925158 | orchestrator | 2026-03-29 00:56:50.925162 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-29 00:56:50.925166 | orchestrator | Sunday 29 March 2026 00:55:07 +0000 (0:00:00.444) 0:08:27.007 ********** 2026-03-29 00:56:50.925173 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.925177 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.925181 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.925184 | orchestrator | 2026-03-29 00:56:50.925188 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-29 00:56:50.925196 | orchestrator | Sunday 29 March 2026 00:55:07 +0000 (0:00:00.575) 0:08:27.583 ********** 2026-03-29 00:56:50.925200 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.925204 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.925207 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.925211 | orchestrator | 2026-03-29 00:56:50.925215 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-29 00:56:50.925218 | orchestrator | Sunday 29 March 2026 00:55:08 +0000 (0:00:00.634) 0:08:28.217 ********** 2026-03-29 00:56:50.925222 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.925226 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.925229 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.925233 | orchestrator | 2026-03-29 00:56:50.925237 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-29 00:56:50.925240 | orchestrator | Sunday 29 March 2026 00:55:08 +0000 
(0:00:00.618) 0:08:28.835 ********** 2026-03-29 00:56:50.925244 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.925248 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.925252 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.925255 | orchestrator | 2026-03-29 00:56:50.925259 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-29 00:56:50.925263 | orchestrator | Sunday 29 March 2026 00:55:09 +0000 (0:00:00.450) 0:08:29.285 ********** 2026-03-29 00:56:50.925266 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.925270 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.925274 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.925278 | orchestrator | 2026-03-29 00:56:50.925281 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-29 00:56:50.925285 | orchestrator | Sunday 29 March 2026 00:55:09 +0000 (0:00:00.264) 0:08:29.550 ********** 2026-03-29 00:56:50.925289 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.925293 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.925297 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.925303 | orchestrator | 2026-03-29 00:56:50.925308 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-29 00:56:50.925315 | orchestrator | Sunday 29 March 2026 00:55:09 +0000 (0:00:00.280) 0:08:29.830 ********** 2026-03-29 00:56:50.925320 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.925326 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.925332 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.925337 | orchestrator | 2026-03-29 00:56:50.925343 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-29 00:56:50.925349 | orchestrator | Sunday 29 March 2026 00:55:10 +0000 (0:00:00.634) 
0:08:30.465 ********** 2026-03-29 00:56:50.925354 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.925359 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.925365 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.925370 | orchestrator | 2026-03-29 00:56:50.925376 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-29 00:56:50.925382 | orchestrator | Sunday 29 March 2026 00:55:11 +0000 (0:00:00.870) 0:08:31.335 ********** 2026-03-29 00:56:50.925395 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.925409 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.925424 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.925438 | orchestrator | 2026-03-29 00:56:50.925455 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-29 00:56:50.925460 | orchestrator | Sunday 29 March 2026 00:55:11 +0000 (0:00:00.267) 0:08:31.603 ********** 2026-03-29 00:56:50.925466 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.925472 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.925478 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.925483 | orchestrator | 2026-03-29 00:56:50.925490 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-29 00:56:50.925495 | orchestrator | Sunday 29 March 2026 00:55:11 +0000 (0:00:00.257) 0:08:31.860 ********** 2026-03-29 00:56:50.925501 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.925507 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.925513 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.925518 | orchestrator | 2026-03-29 00:56:50.925524 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-29 00:56:50.925530 | orchestrator | Sunday 29 March 2026 00:55:12 +0000 (0:00:00.285) 0:08:32.146 ********** 2026-03-29 
00:56:50.925536 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.925542 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.925549 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.925558 | orchestrator | 2026-03-29 00:56:50.925572 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-29 00:56:50.925606 | orchestrator | Sunday 29 March 2026 00:55:12 +0000 (0:00:00.494) 0:08:32.640 ********** 2026-03-29 00:56:50.925612 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.925618 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.925624 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.925629 | orchestrator | 2026-03-29 00:56:50.925635 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-29 00:56:50.925641 | orchestrator | Sunday 29 March 2026 00:55:13 +0000 (0:00:00.299) 0:08:32.939 ********** 2026-03-29 00:56:50.925646 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.925651 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.925664 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.925677 | orchestrator | 2026-03-29 00:56:50.925692 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-29 00:56:50.925706 | orchestrator | Sunday 29 March 2026 00:55:13 +0000 (0:00:00.275) 0:08:33.215 ********** 2026-03-29 00:56:50.925718 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.925730 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.925743 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.925753 | orchestrator | 2026-03-29 00:56:50.925765 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-29 00:56:50.925778 | orchestrator | Sunday 29 March 2026 00:55:13 +0000 (0:00:00.287) 0:08:33.502 ********** 2026-03-29 00:56:50.925790 | orchestrator | 
skipping: [testbed-node-3] 2026-03-29 00:56:50.925802 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.925815 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.925827 | orchestrator | 2026-03-29 00:56:50.925853 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-29 00:56:50.925865 | orchestrator | Sunday 29 March 2026 00:55:14 +0000 (0:00:00.460) 0:08:33.963 ********** 2026-03-29 00:56:50.925877 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.925905 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.925917 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.925930 | orchestrator | 2026-03-29 00:56:50.925944 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-29 00:56:50.925956 | orchestrator | Sunday 29 March 2026 00:55:14 +0000 (0:00:00.433) 0:08:34.396 ********** 2026-03-29 00:56:50.925969 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.925981 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.925994 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.926008 | orchestrator | 2026-03-29 00:56:50.926065 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-29 00:56:50.926072 | orchestrator | Sunday 29 March 2026 00:55:15 +0000 (0:00:00.526) 0:08:34.922 ********** 2026-03-29 00:56:50.926090 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.926096 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.926103 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-03-29 00:56:50.926110 | orchestrator | 2026-03-29 00:56:50.926117 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-03-29 00:56:50.926122 | orchestrator | Sunday 29 March 2026 00:55:15 +0000 (0:00:00.634) 0:08:35.557 ********** 2026-03-29 
00:56:50.926128 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 00:56:50.926134 | orchestrator | 2026-03-29 00:56:50.926144 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-03-29 00:56:50.926157 | orchestrator | Sunday 29 March 2026 00:55:17 +0000 (0:00:01.775) 0:08:37.332 ********** 2026-03-29 00:56:50.926173 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-03-29 00:56:50.926188 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.926200 | orchestrator | 2026-03-29 00:56:50.926213 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-03-29 00:56:50.926226 | orchestrator | Sunday 29 March 2026 00:55:17 +0000 (0:00:00.234) 0:08:37.566 ********** 2026-03-29 00:56:50.926241 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-29 00:56:50.926263 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-29 00:56:50.926269 | orchestrator | 2026-03-29 00:56:50.926275 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-03-29 00:56:50.926281 | orchestrator | Sunday 29 March 2026 00:55:23 +0000 (0:00:06.135) 0:08:43.702 ********** 2026-03-29 00:56:50.926286 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-03-29 00:56:50.926292 | orchestrator | 2026-03-29 00:56:50.926298 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-29 00:56:50.926304 | orchestrator | Sunday 29 March 2026 00:55:26 +0000 (0:00:02.975) 0:08:46.678 ********** 2026-03-29 00:56:50.926310 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:56:50.926316 | orchestrator | 2026-03-29 00:56:50.926322 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-29 00:56:50.926327 | orchestrator | Sunday 29 March 2026 00:55:27 +0000 (0:00:00.758) 0:08:47.437 ********** 2026-03-29 00:56:50.926333 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-29 00:56:50.926338 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-29 00:56:50.926345 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-29 00:56:50.926351 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-29 00:56:50.926357 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-29 00:56:50.926363 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-29 00:56:50.926369 | orchestrator | 2026-03-29 00:56:50.926375 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-29 00:56:50.926380 | orchestrator | Sunday 29 March 2026 00:55:28 +0000 (0:00:01.261) 0:08:48.698 ********** 2026-03-29 00:56:50.926386 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:56:50.926407 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-29 00:56:50.926412 | orchestrator | ok: [testbed-node-3 -> {{ 
groups.get(mon_group_name)[0] }}] 2026-03-29 00:56:50.926415 | orchestrator | 2026-03-29 00:56:50.926419 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-29 00:56:50.926423 | orchestrator | Sunday 29 March 2026 00:55:30 +0000 (0:00:01.905) 0:08:50.604 ********** 2026-03-29 00:56:50.926427 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-29 00:56:50.926431 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-29 00:56:50.926435 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.926443 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-29 00:56:50.926447 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-29 00:56:50.926451 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.926455 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-29 00:56:50.926468 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-29 00:56:50.926471 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:56:50.926475 | orchestrator | 2026-03-29 00:56:50.926479 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-29 00:56:50.926483 | orchestrator | Sunday 29 March 2026 00:55:31 +0000 (0:00:01.210) 0:08:51.814 ********** 2026-03-29 00:56:50.926486 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.926490 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:56:50.926494 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.926498 | orchestrator | 2026-03-29 00:56:50.926501 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-29 00:56:50.926505 | orchestrator | Sunday 29 March 2026 00:55:33 +0000 (0:00:01.955) 0:08:53.769 ********** 2026-03-29 00:56:50.926509 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.926513 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.926516 | 
orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.926520 | orchestrator | 2026-03-29 00:56:50.926524 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-29 00:56:50.926527 | orchestrator | Sunday 29 March 2026 00:55:34 +0000 (0:00:00.459) 0:08:54.229 ********** 2026-03-29 00:56:50.926531 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:56:50.926535 | orchestrator | 2026-03-29 00:56:50.926539 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-29 00:56:50.926543 | orchestrator | Sunday 29 March 2026 00:55:34 +0000 (0:00:00.481) 0:08:54.711 ********** 2026-03-29 00:56:50.926546 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:56:50.926550 | orchestrator | 2026-03-29 00:56:50.926554 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-29 00:56:50.926558 | orchestrator | Sunday 29 March 2026 00:55:35 +0000 (0:00:00.711) 0:08:55.423 ********** 2026-03-29 00:56:50.926561 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:56:50.926565 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.926569 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.926573 | orchestrator | 2026-03-29 00:56:50.926576 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-29 00:56:50.926604 | orchestrator | Sunday 29 March 2026 00:55:36 +0000 (0:00:01.334) 0:08:56.758 ********** 2026-03-29 00:56:50.926611 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.926615 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:56:50.926619 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.926623 | orchestrator | 2026-03-29 00:56:50.926627 | orchestrator | TASK 
[ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-29 00:56:50.926630 | orchestrator | Sunday 29 March 2026 00:55:37 +0000 (0:00:01.117) 0:08:57.876 ********** 2026-03-29 00:56:50.926634 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.926638 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:56:50.926646 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.926650 | orchestrator | 2026-03-29 00:56:50.926654 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-29 00:56:50.926658 | orchestrator | Sunday 29 March 2026 00:55:39 +0000 (0:00:01.753) 0:08:59.629 ********** 2026-03-29 00:56:50.926662 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.926665 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:56:50.926669 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.926673 | orchestrator | 2026-03-29 00:56:50.926676 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-29 00:56:50.926680 | orchestrator | Sunday 29 March 2026 00:55:42 +0000 (0:00:02.670) 0:09:02.300 ********** 2026-03-29 00:56:50.926684 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.926688 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.926692 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.926695 | orchestrator | 2026-03-29 00:56:50.926699 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-29 00:56:50.926703 | orchestrator | Sunday 29 March 2026 00:55:43 +0000 (0:00:01.576) 0:09:03.876 ********** 2026-03-29 00:56:50.926706 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.926710 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:56:50.926715 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.926721 | orchestrator | 2026-03-29 00:56:50.926727 | orchestrator | RUNNING HANDLER [ceph-handler : 
Mdss handler] ********************************** 2026-03-29 00:56:50.926734 | orchestrator | Sunday 29 March 2026 00:55:45 +0000 (0:00:01.219) 0:09:05.096 ********** 2026-03-29 00:56:50.926740 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:56:50.926746 | orchestrator | 2026-03-29 00:56:50.926753 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-29 00:56:50.926759 | orchestrator | Sunday 29 March 2026 00:55:45 +0000 (0:00:00.461) 0:09:05.557 ********** 2026-03-29 00:56:50.926765 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.926770 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.926777 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.926782 | orchestrator | 2026-03-29 00:56:50.926789 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-29 00:56:50.926795 | orchestrator | Sunday 29 March 2026 00:55:45 +0000 (0:00:00.312) 0:09:05.870 ********** 2026-03-29 00:56:50.926801 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.926808 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:56:50.926814 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.926820 | orchestrator | 2026-03-29 00:56:50.926825 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-29 00:56:50.926831 | orchestrator | Sunday 29 March 2026 00:55:47 +0000 (0:00:01.350) 0:09:07.221 ********** 2026-03-29 00:56:50.926839 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:56:50.926843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:56:50.926847 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:56:50.926851 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.926854 | orchestrator | 
2026-03-29 00:56:50.926858 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-29 00:56:50.926867 | orchestrator | Sunday 29 March 2026 00:55:47 +0000 (0:00:00.562) 0:09:07.784 ********** 2026-03-29 00:56:50.926871 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.926875 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.926879 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.926882 | orchestrator | 2026-03-29 00:56:50.926886 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-29 00:56:50.926890 | orchestrator | 2026-03-29 00:56:50.926894 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-29 00:56:50.926897 | orchestrator | Sunday 29 March 2026 00:55:48 +0000 (0:00:00.475) 0:09:08.260 ********** 2026-03-29 00:56:50.926905 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:56:50.926910 | orchestrator | 2026-03-29 00:56:50.926913 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-29 00:56:50.926917 | orchestrator | Sunday 29 March 2026 00:55:48 +0000 (0:00:00.587) 0:09:08.847 ********** 2026-03-29 00:56:50.926921 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:56:50.926925 | orchestrator | 2026-03-29 00:56:50.926928 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-29 00:56:50.926932 | orchestrator | Sunday 29 March 2026 00:55:49 +0000 (0:00:00.453) 0:09:09.301 ********** 2026-03-29 00:56:50.926936 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.926940 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.926944 | orchestrator | skipping: 
[testbed-node-5] 2026-03-29 00:56:50.926947 | orchestrator | 2026-03-29 00:56:50.926951 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-29 00:56:50.926955 | orchestrator | Sunday 29 March 2026 00:55:49 +0000 (0:00:00.421) 0:09:09.722 ********** 2026-03-29 00:56:50.926959 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.926962 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.926966 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.926970 | orchestrator | 2026-03-29 00:56:50.926973 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-29 00:56:50.926977 | orchestrator | Sunday 29 March 2026 00:55:50 +0000 (0:00:00.620) 0:09:10.343 ********** 2026-03-29 00:56:50.926981 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.926985 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.926988 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.926992 | orchestrator | 2026-03-29 00:56:50.926996 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-29 00:56:50.927073 | orchestrator | Sunday 29 March 2026 00:55:52 +0000 (0:00:01.804) 0:09:12.148 ********** 2026-03-29 00:56:50.927094 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.927097 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.927101 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.927105 | orchestrator | 2026-03-29 00:56:50.927109 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-29 00:56:50.927113 | orchestrator | Sunday 29 March 2026 00:55:52 +0000 (0:00:00.730) 0:09:12.878 ********** 2026-03-29 00:56:50.927116 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.927120 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.927124 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.927128 | 
orchestrator | 2026-03-29 00:56:50.927131 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-29 00:56:50.927135 | orchestrator | Sunday 29 March 2026 00:55:53 +0000 (0:00:00.681) 0:09:13.560 ********** 2026-03-29 00:56:50.927139 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.927143 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.927146 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.927150 | orchestrator | 2026-03-29 00:56:50.927154 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-29 00:56:50.927157 | orchestrator | Sunday 29 March 2026 00:55:53 +0000 (0:00:00.325) 0:09:13.886 ********** 2026-03-29 00:56:50.927161 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.927165 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.927171 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.927177 | orchestrator | 2026-03-29 00:56:50.927183 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-29 00:56:50.927190 | orchestrator | Sunday 29 March 2026 00:55:54 +0000 (0:00:00.315) 0:09:14.201 ********** 2026-03-29 00:56:50.927195 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.927202 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.927213 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.927219 | orchestrator | 2026-03-29 00:56:50.927225 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-29 00:56:50.927230 | orchestrator | Sunday 29 March 2026 00:55:55 +0000 (0:00:00.700) 0:09:14.901 ********** 2026-03-29 00:56:50.927236 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.927243 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.927249 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.927256 | orchestrator | 2026-03-29 
00:56:50.927263 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-29 00:56:50.927267 | orchestrator | Sunday 29 March 2026 00:55:55 +0000 (0:00:00.997) 0:09:15.899 ********** 2026-03-29 00:56:50.927270 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.927274 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.927278 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.927282 | orchestrator | 2026-03-29 00:56:50.927285 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-29 00:56:50.927289 | orchestrator | Sunday 29 March 2026 00:55:56 +0000 (0:00:00.306) 0:09:16.205 ********** 2026-03-29 00:56:50.927293 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.927296 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.927300 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.927304 | orchestrator | 2026-03-29 00:56:50.927307 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-29 00:56:50.927314 | orchestrator | Sunday 29 March 2026 00:55:56 +0000 (0:00:00.320) 0:09:16.525 ********** 2026-03-29 00:56:50.927318 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.927322 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.927325 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.927329 | orchestrator | 2026-03-29 00:56:50.927338 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-29 00:56:50.927342 | orchestrator | Sunday 29 March 2026 00:55:56 +0000 (0:00:00.335) 0:09:16.861 ********** 2026-03-29 00:56:50.927345 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.927349 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.927353 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.927357 | orchestrator | 2026-03-29 00:56:50.927360 | orchestrator | TASK 
[ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-29 00:56:50.927364 | orchestrator | Sunday 29 March 2026 00:55:57 +0000 (0:00:00.635) 0:09:17.496 ********** 2026-03-29 00:56:50.927368 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.927371 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.927375 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.927379 | orchestrator | 2026-03-29 00:56:50.927383 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-29 00:56:50.927386 | orchestrator | Sunday 29 March 2026 00:55:57 +0000 (0:00:00.364) 0:09:17.861 ********** 2026-03-29 00:56:50.927390 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.927394 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.927398 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.927403 | orchestrator | 2026-03-29 00:56:50.927408 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-29 00:56:50.927415 | orchestrator | Sunday 29 March 2026 00:55:58 +0000 (0:00:00.342) 0:09:18.203 ********** 2026-03-29 00:56:50.927421 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.927428 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.927434 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.927440 | orchestrator | 2026-03-29 00:56:50.927447 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-29 00:56:50.927454 | orchestrator | Sunday 29 March 2026 00:55:58 +0000 (0:00:00.317) 0:09:18.521 ********** 2026-03-29 00:56:50.927460 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.927466 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.927472 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.927478 | orchestrator | 2026-03-29 00:56:50.927484 | orchestrator | TASK [ceph-handler : 
Set_fact handler_crash_status] **************************** 2026-03-29 00:56:50.927495 | orchestrator | Sunday 29 March 2026 00:55:59 +0000 (0:00:00.608) 0:09:19.130 ********** 2026-03-29 00:56:50.927501 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.927507 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.927514 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.927520 | orchestrator | 2026-03-29 00:56:50.927526 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-29 00:56:50.927532 | orchestrator | Sunday 29 March 2026 00:55:59 +0000 (0:00:00.338) 0:09:19.468 ********** 2026-03-29 00:56:50.927538 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.927544 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.927551 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.927558 | orchestrator | 2026-03-29 00:56:50.927562 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-29 00:56:50.927566 | orchestrator | Sunday 29 March 2026 00:56:00 +0000 (0:00:00.532) 0:09:20.000 ********** 2026-03-29 00:56:50.927570 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:56:50.927573 | orchestrator | 2026-03-29 00:56:50.927626 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-29 00:56:50.927631 | orchestrator | Sunday 29 March 2026 00:56:00 +0000 (0:00:00.753) 0:09:20.754 ********** 2026-03-29 00:56:50.927635 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:56:50.927638 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-29 00:56:50.927643 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-29 00:56:50.927646 | orchestrator | 2026-03-29 00:56:50.927650 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if 
needed] *********************************** 2026-03-29 00:56:50.927654 | orchestrator | Sunday 29 March 2026 00:56:02 +0000 (0:00:01.725) 0:09:22.479 ********** 2026-03-29 00:56:50.927658 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-29 00:56:50.927662 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-29 00:56:50.927666 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.927669 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-29 00:56:50.927673 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-29 00:56:50.927677 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:56:50.927681 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-29 00:56:50.927684 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-29 00:56:50.927688 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.927692 | orchestrator | 2026-03-29 00:56:50.927696 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-29 00:56:50.927699 | orchestrator | Sunday 29 March 2026 00:56:03 +0000 (0:00:01.081) 0:09:23.561 ********** 2026-03-29 00:56:50.927703 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.927707 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.927710 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.927714 | orchestrator | 2026-03-29 00:56:50.927718 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-29 00:56:50.927722 | orchestrator | Sunday 29 March 2026 00:56:03 +0000 (0:00:00.307) 0:09:23.868 ********** 2026-03-29 00:56:50.927726 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:56:50.927729 | orchestrator | 2026-03-29 00:56:50.927733 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 
2026-03-29 00:56:50.927737 | orchestrator | Sunday 29 March 2026 00:56:04 +0000 (0:00:00.782) 0:09:24.650 ********** 2026-03-29 00:56:50.927745 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-29 00:56:50.927755 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-29 00:56:50.927763 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-29 00:56:50.927767 | orchestrator | 2026-03-29 00:56:50.927771 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-29 00:56:50.927774 | orchestrator | Sunday 29 March 2026 00:56:05 +0000 (0:00:00.764) 0:09:25.415 ********** 2026-03-29 00:56:50.927778 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:56:50.927782 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-29 00:56:50.927786 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:56:50.927790 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-29 00:56:50.927793 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:56:50.927797 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-29 00:56:50.927801 | orchestrator | 2026-03-29 00:56:50.927805 | orchestrator | TASK [ceph-rgw : Get keys 
from monitors] *************************************** 2026-03-29 00:56:50.927808 | orchestrator | Sunday 29 March 2026 00:56:09 +0000 (0:00:04.072) 0:09:29.487 ********** 2026-03-29 00:56:50.927812 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:56:50.927816 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-29 00:56:50.927820 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:56:50.927824 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-29 00:56:50.927828 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:56:50.927834 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-29 00:56:50.927840 | orchestrator | 2026-03-29 00:56:50.927845 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-29 00:56:50.927851 | orchestrator | Sunday 29 March 2026 00:56:12 +0000 (0:00:02.447) 0:09:31.935 ********** 2026-03-29 00:56:50.927857 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-29 00:56:50.927863 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.927869 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-29 00:56:50.927877 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:56:50.927881 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-29 00:56:50.927884 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.927888 | orchestrator | 2026-03-29 00:56:50.927892 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-29 00:56:50.927895 | orchestrator | Sunday 29 March 2026 00:56:13 +0000 (0:00:01.117) 0:09:33.052 ********** 2026-03-29 00:56:50.927899 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-29 
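The delegation shown in these tasks (`testbed-node-3 -> testbed-node-0(192.168.16.10)`, with the raw `{{ groups[mon_group_name][0] ... }}` expression visible in the task labels) follows a common ceph-ansible pattern: key material is created on the first monitor and then fetched to each rgw host. A minimal sketch of that pattern, with hypothetical task content (the real tasks live in `/ansible/roles/ceph-rgw/tasks/pre_requisite.yml`):

```yaml
# Sketch only: illustrates the delegate-to-first-mon pattern from the log.
# Task name, key name, and variables are illustrative, not the actual role content.
- name: Create rgw keyring on the first monitor
  ceph_key:
    name: "client.rgw.{{ ansible_facts['hostname'] }}.rgw0"
    cluster: "{{ cluster }}"
    state: present
  # Falls back to localhost when no monitor group exists, as in the log output
  delegate_to: "{{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}"
```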
00:56:50.927903 | orchestrator | 2026-03-29 00:56:50.927906 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-29 00:56:50.927910 | orchestrator | Sunday 29 March 2026 00:56:13 +0000 (0:00:00.213) 0:09:33.266 ********** 2026-03-29 00:56:50.927914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 00:56:50.927918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 00:56:50.927922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 00:56:50.927929 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 00:56:50.927933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 00:56:50.927936 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.927940 | orchestrator | 2026-03-29 00:56:50.927944 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-29 00:56:50.927947 | orchestrator | Sunday 29 March 2026 00:56:13 +0000 (0:00:00.509) 0:09:33.775 ********** 2026-03-29 00:56:50.927951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 00:56:50.927955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 00:56:50.927959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-29 00:56:50.927966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 00:56:50.927970 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 00:56:50.927977 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.927981 | orchestrator | 2026-03-29 00:56:50.927985 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-29 00:56:50.927989 | orchestrator | Sunday 29 March 2026 00:56:14 +0000 (0:00:00.527) 0:09:34.303 ********** 2026-03-29 00:56:50.927992 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-29 00:56:50.927996 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-29 00:56:50.928000 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-29 00:56:50.928004 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-29 00:56:50.928008 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-29 00:56:50.928011 | orchestrator | 2026-03-29 00:56:50.928015 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-29 00:56:50.928019 | orchestrator | Sunday 29 March 2026 00:56:34 +0000 (0:00:20.168) 0:09:54.472 
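The "Create rgw pools" task (20.17s in the recap below, the fourth-slowest task of the run) creates one replicated pool per item, each with `pg_num: 8` and `size: 3` as shown in the loop items. In plain `ceph` CLI terms this corresponds roughly to the following; the commands are echoed here as a sketch, since the exact ceph-ansible invocation may differ:

```shell
# Sketch only: pool names, pg_num=8, and size=3 are taken from the log above.
pools="default.rgw.buckets.data default.rgw.buckets.index default.rgw.control default.rgw.log default.rgw.meta"
for pool in $pools; do
    # Create each pool replicated with 8 placement groups, then set replica count to 3.
    echo "ceph osd pool create $pool 8 8 replicated"
    echo "ceph osd pool set $pool size 3"
done
```

On a live cluster the `echo` would be dropped and the commands run against a cluster admin keyring.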
********** 2026-03-29 00:56:50.928023 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.928026 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.928030 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.928034 | orchestrator | 2026-03-29 00:56:50.928037 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-29 00:56:50.928041 | orchestrator | Sunday 29 March 2026 00:56:34 +0000 (0:00:00.290) 0:09:54.762 ********** 2026-03-29 00:56:50.928046 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.928053 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.928059 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.928065 | orchestrator | 2026-03-29 00:56:50.928071 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-29 00:56:50.928077 | orchestrator | Sunday 29 March 2026 00:56:35 +0000 (0:00:00.480) 0:09:55.243 ********** 2026-03-29 00:56:50.928081 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:56:50.928084 | orchestrator | 2026-03-29 00:56:50.928091 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-29 00:56:50.928095 | orchestrator | Sunday 29 March 2026 00:56:35 +0000 (0:00:00.485) 0:09:55.729 ********** 2026-03-29 00:56:50.928099 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:56:50.928104 | orchestrator | 2026-03-29 00:56:50.928110 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-29 00:56:50.928117 | orchestrator | Sunday 29 March 2026 00:56:36 +0000 (0:00:00.672) 0:09:56.402 ********** 2026-03-29 00:56:50.928121 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.928125 | orchestrator | 
changed: [testbed-node-4] 2026-03-29 00:56:50.928129 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.928132 | orchestrator | 2026-03-29 00:56:50.928136 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-29 00:56:50.928140 | orchestrator | Sunday 29 March 2026 00:56:37 +0000 (0:00:01.161) 0:09:57.563 ********** 2026-03-29 00:56:50.928144 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.928147 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:56:50.928151 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.928155 | orchestrator | 2026-03-29 00:56:50.928158 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-29 00:56:50.928162 | orchestrator | Sunday 29 March 2026 00:56:38 +0000 (0:00:01.090) 0:09:58.653 ********** 2026-03-29 00:56:50.928166 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:56:50.928170 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:56:50.928174 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:56:50.928177 | orchestrator | 2026-03-29 00:56:50.928181 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-29 00:56:50.928185 | orchestrator | Sunday 29 March 2026 00:56:41 +0000 (0:00:02.648) 0:10:01.302 ********** 2026-03-29 00:56:50.928188 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-29 00:56:50.928192 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-29 00:56:50.928196 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-29 00:56:50.928200 | orchestrator | 2026-03-29 00:56:50.928204 | orchestrator | RUNNING HANDLER 
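The systemd tasks above generate a per-instance unit plus a `ceph-radosgw.target`, enable the target, and then start one rgw container per `instance_name` item. A hypothetical sketch of the shape of the generated unit (the real template ships with ceph-ansible's ceph-rgw role; the options here are illustrative only):

```ini
# Hypothetical sketch, not the actual ceph-ansible template.
[Unit]
Description=Ceph RGW container instance
After=network-online.target docker.service

[Service]
# The real ExecStart wraps the rgw container (docker/podman run ...) per instance
ExecStart=/usr/bin/docker start -a ceph-rgw-%i
Restart=always

[Install]
# Enabling ceph-radosgw.target then pulls in all instances, matching the
# "Enable ceph-radosgw.target" and "Systemd start rgw container" tasks above
WantedBy=ceph-radosgw.target
```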
[ceph-handler : Make tempdir for scripts] ********************** 2026-03-29 00:56:50.928207 | orchestrator | Sunday 29 March 2026 00:56:44 +0000 (0:00:02.939) 0:10:04.241 ********** 2026-03-29 00:56:50.928211 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.928215 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.928219 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.928222 | orchestrator | 2026-03-29 00:56:50.928229 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-29 00:56:50.928233 | orchestrator | Sunday 29 March 2026 00:56:44 +0000 (0:00:00.321) 0:10:04.563 ********** 2026-03-29 00:56:50.928240 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:56:50.928243 | orchestrator | 2026-03-29 00:56:50.928247 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-29 00:56:50.928251 | orchestrator | Sunday 29 March 2026 00:56:45 +0000 (0:00:00.883) 0:10:05.446 ********** 2026-03-29 00:56:50.928255 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.928258 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.928262 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.928266 | orchestrator | 2026-03-29 00:56:50.928270 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-29 00:56:50.928273 | orchestrator | Sunday 29 March 2026 00:56:45 +0000 (0:00:00.317) 0:10:05.764 ********** 2026-03-29 00:56:50.928277 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.928281 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:56:50.928287 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:56:50.928291 | orchestrator | 2026-03-29 00:56:50.928295 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-29 
00:56:50.928299 | orchestrator | Sunday 29 March 2026 00:56:46 +0000 (0:00:00.351) 0:10:06.116 ********** 2026-03-29 00:56:50.928302 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:56:50.928306 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:56:50.928310 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:56:50.928313 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:56:50.928317 | orchestrator | 2026-03-29 00:56:50.928321 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-29 00:56:50.928324 | orchestrator | Sunday 29 March 2026 00:56:47 +0000 (0:00:01.174) 0:10:07.291 ********** 2026-03-29 00:56:50.928328 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:56:50.928332 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:56:50.928336 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:56:50.928339 | orchestrator | 2026-03-29 00:56:50.928343 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:56:50.928347 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-29 00:56:50.928351 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-29 00:56:50.928355 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-29 00:56:50.928359 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-29 00:56:50.928363 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-29 00:56:50.928366 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-29 00:56:50.928370 | orchestrator | 2026-03-29 
00:56:50.928374 | orchestrator | 2026-03-29 00:56:50.928377 | orchestrator | 2026-03-29 00:56:50.928381 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:56:50.928385 | orchestrator | Sunday 29 March 2026 00:56:47 +0000 (0:00:00.248) 0:10:07.540 ********** 2026-03-29 00:56:50.928389 | orchestrator | =============================================================================== 2026-03-29 00:56:50.928392 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 51.57s 2026-03-29 00:56:50.928396 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 33.96s 2026-03-29 00:56:50.928400 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.39s 2026-03-29 00:56:50.928403 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 20.17s 2026-03-29 00:56:50.928407 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 13.64s 2026-03-29 00:56:50.928411 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.55s 2026-03-29 00:56:50.928415 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 10.19s 2026-03-29 00:56:50.928418 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 8.30s 2026-03-29 00:56:50.928422 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 6.65s 2026-03-29 00:56:50.928425 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.31s 2026-03-29 00:56:50.928429 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 6.14s 2026-03-29 00:56:50.928433 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.09s 2026-03-29 00:56:50.928439 | orchestrator | ceph-mgr : Add modules 
to ceph-mgr -------------------------------------- 4.65s 2026-03-29 00:56:50.928443 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.24s 2026-03-29 00:56:50.928447 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.07s 2026-03-29 00:56:50.928451 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.84s 2026-03-29 00:56:50.928454 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.68s 2026-03-29 00:56:50.928461 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.56s 2026-03-29 00:56:50.928465 | orchestrator | ceph-mon : Generate systemd unit file for mon container ----------------- 3.30s 2026-03-29 00:56:50.928468 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.29s 2026-03-29 00:56:50.928474 | 2026-03-29 00:56:50 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state STARTED 2026-03-29 00:56:50.928478 | orchestrator | 2026-03-29 00:56:50 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:56:50.928482 | orchestrator | 2026-03-29 00:56:50 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:57:33.625303 | orchestrator | 2026-03-29 00:57:33 | INFO  | Task 1f78857e-e9f0-4ef0-b3d3-37a20238e210 is in state STARTED 2026-03-29 00:57:33.628307 | orchestrator | 2026-03-29 00:57:33 | INFO  | Task 01da8d9b-15ae-411a-8ee0-f8beb9938160 is in state SUCCESS 2026-03-29 00:57:33.629639 | orchestrator | 2026-03-29 00:57:33.629683 | orchestrator | 2026-03-29 00:57:33.629692 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 00:57:33.629700 | orchestrator | 2026-03-29 00:57:33.629708 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 00:57:33.629715 | orchestrator | Sunday 29 March 2026 00:55:07 +0000 (0:00:00.277) 0:00:00.277 ********** 2026-03-29 00:57:33.629722 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:57:33.629729 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:57:33.629736 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:57:33.629743 | orchestrator | 2026-03-29 00:57:33.629751 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 00:57:33.629757 | orchestrator | Sunday 29 March 2026 00:55:07 +0000 (0:00:00.255) 0:00:00.532 ********** 2026-03-29 00:57:33.629765 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-29 00:57:33.629771 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-29 00:57:33.629778 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-29 00:57:33.629784 | orchestrator | 2026-03-29 00:57:33.629790 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-29 00:57:33.629797 | orchestrator | 2026-03-29 00:57:33.629804 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-29
00:57:33.629811 | orchestrator | Sunday 29 March 2026 00:55:08 +0000 (0:00:00.286) 0:00:00.819 ********** 2026-03-29 00:57:33.629818 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:57:33.629824 | orchestrator | 2026-03-29 00:57:33.629831 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-29 00:57:33.629838 | orchestrator | Sunday 29 March 2026 00:55:08 +0000 (0:00:00.521) 0:00:01.340 ********** 2026-03-29 00:57:33.629845 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-29 00:57:33.629851 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-29 00:57:33.629868 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-29 00:57:33.629875 | orchestrator | 2026-03-29 00:57:33.629882 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-29 00:57:33.629888 | orchestrator | Sunday 29 March 2026 00:55:09 +0000 (0:00:00.850) 0:00:02.190 ********** 2026-03-29 00:57:33.629897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:57:33.629922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:57:33.629939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:57:33.629948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:57:33.629961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 
00:57:33.629975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:57:33.629982 | orchestrator | 2026-03-29 00:57:33.629988 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-29 00:57:33.629995 | orchestrator | Sunday 29 March 2026 00:55:10 +0000 (0:00:01.190) 0:00:03.381 ********** 2026-03-29 00:57:33.630001 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:57:33.630008 | orchestrator | 2026-03-29 00:57:33.630045 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-29 00:57:33.630054 | orchestrator | Sunday 29 March 2026 00:55:11 +0000 (0:00:00.441) 0:00:03.823 ********** 2026-03-29 00:57:33.630069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:57:33.630077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:57:33.630088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:57:33.630101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:57:33.630113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:57:33.630121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:57:33.630131 | orchestrator | 2026-03-29 00:57:33.630138 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-29 00:57:33.630144 | orchestrator | Sunday 29 March 2026 00:55:13 +0000 (0:00:02.726) 0:00:06.549 ********** 2026-03-29 
00:57:33.630154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 00:57:33.630166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 00:57:33.630173 | orchestrator | skipping: 
[testbed-node-1] 2026-03-29 00:57:33.630180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 00:57:33.630193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 00:57:33.630200 
| orchestrator | skipping: [testbed-node-0] 2026-03-29 00:57:33.630210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 00:57:33.630310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2026-03-29 00:57:33.630319 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:57:33.630326 | orchestrator | 2026-03-29 00:57:33.630333 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-29 00:57:33.630339 | orchestrator | Sunday 29 March 2026 00:55:14 +0000 (0:00:00.971) 0:00:07.521 ********** 2026-03-29 00:57:33.630345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 00:57:33.630359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 00:57:33.630366 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:57:33.630376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 00:57:33.630387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 00:57:33.630394 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:57:33.630400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 00:57:33.630412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 00:57:33.630419 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:57:33.630426 | orchestrator | 2026-03-29 00:57:33.630432 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-29 00:57:33.630438 | orchestrator | Sunday 29 March 2026 00:55:15 +0000 (0:00:00.716) 0:00:08.237 ********** 2026-03-29 00:57:33.630447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:57:33.630464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:57:33.630470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:57:33.630482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:57:33.630490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:57:33.630504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:57:33.630512 | orchestrator | 2026-03-29 00:57:33.630518 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-29 00:57:33.630525 | orchestrator | Sunday 29 March 2026 00:55:18 +0000 (0:00:02.542) 0:00:10.779 ********** 2026-03-29 00:57:33.630548 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:57:33.630555 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:57:33.630561 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:57:33.630567 | orchestrator | 2026-03-29 00:57:33.630573 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-29 00:57:33.630580 | orchestrator | Sunday 29 March 2026 00:55:21 +0000 (0:00:02.915) 0:00:13.695 ********** 2026-03-29 00:57:33.630586 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:57:33.630592 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:57:33.630598 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:57:33.630604 | orchestrator | 2026-03-29 00:57:33.630610 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-29 00:57:33.630616 | orchestrator | Sunday 29 March 2026 00:55:22 +0000 (0:00:01.503) 0:00:15.198 ********** 2026-03-29 00:57:33.630623 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:57:33.630635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:57:33.630651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:57:33.630659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:57:33.630666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:57:33.630678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:57:33.630689 | orchestrator | 2026-03-29 00:57:33.630695 | orchestrator | TASK [opensearch : 
include_tasks] **********************************************
2026-03-29 00:57:33.630701 | orchestrator | Sunday 29 March 2026 00:55:24 +0000 (0:00:01.961) 0:00:17.160 **********
2026-03-29 00:57:33.630707 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:57:33.630714 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:57:33.630720 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:57:33.630727 | orchestrator |
2026-03-29 00:57:33.630733 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-29 00:57:33.630739 | orchestrator | Sunday 29 March 2026 00:55:24 +0000 (0:00:00.397) 0:00:17.558 **********
2026-03-29 00:57:33.630745 | orchestrator |
2026-03-29 00:57:33.630752 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-29 00:57:33.630758 | orchestrator | Sunday 29 March 2026 00:55:24 +0000 (0:00:00.057) 0:00:17.615 **********
2026-03-29 00:57:33.630764 | orchestrator |
2026-03-29 00:57:33.630770 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-29 00:57:33.630777 | orchestrator | Sunday 29 March 2026 00:55:24 +0000 (0:00:00.057) 0:00:17.673 **********
2026-03-29 00:57:33.630784 | orchestrator |
2026-03-29 00:57:33.630793 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-03-29 00:57:33.630800 | orchestrator | Sunday 29 March 2026 00:55:25 +0000 (0:00:00.061) 0:00:17.735 **********
2026-03-29 00:57:33.630807 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:57:33.630814 | orchestrator |
2026-03-29 00:57:33.630820 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-03-29 00:57:33.630827 | orchestrator | Sunday 29 March 2026 00:55:25 +0000 (0:00:00.198) 0:00:17.934 **********
2026-03-29 00:57:33.630833 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:57:33.630840 | orchestrator |
2026-03-29 00:57:33.630846 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-03-29 00:57:33.630852 | orchestrator | Sunday 29 March 2026 00:55:25 +0000 (0:00:00.179) 0:00:18.113 **********
2026-03-29 00:57:33.630859 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:57:33.630865 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:57:33.630871 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:57:33.630878 | orchestrator |
2026-03-29 00:57:33.630884 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-03-29 00:57:33.630890 | orchestrator | Sunday 29 March 2026 00:56:19 +0000 (0:00:54.457) 0:01:12.571 **********
2026-03-29 00:57:33.630896 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:57:33.630902 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:57:33.630909 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:57:33.630915 | orchestrator |
2026-03-29 00:57:33.630920 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-29 00:57:33.630926 | orchestrator | Sunday 29 March 2026 00:57:19 +0000 (0:00:59.165) 0:02:11.737 **********
2026-03-29 00:57:33.630931 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:57:33.630937 | orchestrator |
2026-03-29 00:57:33.630943 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-03-29 00:57:33.630949 | orchestrator | Sunday 29 March 2026 00:57:19 +0000 (0:00:00.672) 0:02:12.410 **********
2026-03-29 00:57:33.630955 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:57:33.630961 | orchestrator |
2026-03-29 00:57:33.630966 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] **************
2026-03-29 00:57:33.630973 | orchestrator | Sunday 29 March 2026 00:57:22 +0000 (0:00:03.025) 0:02:15.436 **********
2026-03-29 00:57:33.630986 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:57:33.630993 | orchestrator |
2026-03-29 00:57:33.630999 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-03-29 00:57:33.631005 | orchestrator | Sunday 29 March 2026 00:57:25 +0000 (0:00:02.334) 0:02:17.770 **********
2026-03-29 00:57:33.631011 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:57:33.631018 | orchestrator |
2026-03-29 00:57:33.631023 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-03-29 00:57:33.631029 | orchestrator | Sunday 29 March 2026 00:57:27 +0000 (0:00:02.043) 0:02:19.813 **********
2026-03-29 00:57:33.631035 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:57:33.631041 | orchestrator |
2026-03-29 00:57:33.631046 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-03-29 00:57:33.631051 | orchestrator | Sunday 29 March 2026 00:57:29 +0000 (0:00:02.424) 0:02:22.238 **********
2026-03-29 00:57:33.631057 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:57:33.631063 | orchestrator |
2026-03-29 00:57:33.631068 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:57:33.631075 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-29 00:57:33.631083 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-29 00:57:33.631096 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-29 00:57:33.631103 | orchestrator |
2026-03-29 00:57:33.631110 | orchestrator |
2026-03-29 00:57:33.631116 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:57:33.631123 | orchestrator |
Sunday 29 March 2026 00:57:32 +0000 (0:00:02.559) 0:02:24.798 ********** 2026-03-29 00:57:33.631130 | orchestrator | =============================================================================== 2026-03-29 00:57:33.631136 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 59.17s 2026-03-29 00:57:33.631142 | orchestrator | opensearch : Restart opensearch container ------------------------------ 54.46s 2026-03-29 00:57:33.631148 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.03s 2026-03-29 00:57:33.631154 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.92s 2026-03-29 00:57:33.631160 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.73s 2026-03-29 00:57:33.631166 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.56s 2026-03-29 00:57:33.631172 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.54s 2026-03-29 00:57:33.631178 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.42s 2026-03-29 00:57:33.631185 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.33s 2026-03-29 00:57:33.631192 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.04s 2026-03-29 00:57:33.631198 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.96s 2026-03-29 00:57:33.631204 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.50s 2026-03-29 00:57:33.631210 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.19s 2026-03-29 00:57:33.631219 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.97s 2026-03-29 00:57:33.631226 | orchestrator | opensearch : 
Setting sysctl values -------------------------------------- 0.85s 2026-03-29 00:57:33.631232 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.72s 2026-03-29 00:57:33.631239 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.67s 2026-03-29 00:57:33.631245 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-03-29 00:57:33.631257 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.44s 2026-03-29 00:57:33.631263 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.40s 2026-03-29 00:57:33.631270 | orchestrator | 2026-03-29 00:57:33 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:57:33.631276 | orchestrator | 2026-03-29 00:57:33 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:57:36.686657 | orchestrator | 2026-03-29 00:57:36 | INFO  | Task 1f78857e-e9f0-4ef0-b3d3-37a20238e210 is in state STARTED 2026-03-29 00:57:36.687858 | orchestrator | 2026-03-29 00:57:36 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:57:36.687907 | orchestrator | 2026-03-29 00:57:36 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:57:39.747449 | orchestrator | 2026-03-29 00:57:39 | INFO  | Task 1f78857e-e9f0-4ef0-b3d3-37a20238e210 is in state STARTED 2026-03-29 00:57:39.747585 | orchestrator | 2026-03-29 00:57:39 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:57:39.747604 | orchestrator | 2026-03-29 00:57:39 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:57:42.803917 | orchestrator | 2026-03-29 00:57:42 | INFO  | Task 1f78857e-e9f0-4ef0-b3d3-37a20238e210 is in state STARTED 2026-03-29 00:57:42.804809 | orchestrator | 2026-03-29 00:57:42 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 
00:57:42.804853 | orchestrator | 2026-03-29 00:57:42 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:57:45.857499 | orchestrator | 2026-03-29 00:57:45 | INFO  | Task 1f78857e-e9f0-4ef0-b3d3-37a20238e210 is in state STARTED 2026-03-29 00:57:45.858853 | orchestrator | 2026-03-29 00:57:45 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:57:45.859207 | orchestrator | 2026-03-29 00:57:45 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:57:48.912810 | orchestrator | 2026-03-29 00:57:48 | INFO  | Task 1f78857e-e9f0-4ef0-b3d3-37a20238e210 is in state STARTED 2026-03-29 00:57:48.916225 | orchestrator | 2026-03-29 00:57:48 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:57:48.916382 | orchestrator | 2026-03-29 00:57:48 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:57:51.968651 | orchestrator | 2026-03-29 00:57:51 | INFO  | Task 1f78857e-e9f0-4ef0-b3d3-37a20238e210 is in state STARTED 2026-03-29 00:57:51.970556 | orchestrator | 2026-03-29 00:57:51 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:57:51.970619 | orchestrator | 2026-03-29 00:57:51 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:57:55.016162 | orchestrator | 2026-03-29 00:57:55 | INFO  | Task 1f78857e-e9f0-4ef0-b3d3-37a20238e210 is in state STARTED 2026-03-29 00:57:55.018678 | orchestrator | 2026-03-29 00:57:55 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:57:55.018755 | orchestrator | 2026-03-29 00:57:55 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:57:58.084115 | orchestrator | 2026-03-29 00:57:58 | INFO  | Task 1f78857e-e9f0-4ef0-b3d3-37a20238e210 is in state STARTED 2026-03-29 00:57:58.085815 | orchestrator | 2026-03-29 00:57:58 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state STARTED 2026-03-29 00:57:58.085968 | orchestrator | 2026-03-29 00:57:58 | INFO  | Wait 1 second(s) 
until the next check 2026-03-29 00:58:01.142495 | orchestrator | 2026-03-29 00:58:01 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:58:01.144200 | orchestrator | 2026-03-29 00:58:01 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:58:01.145097 | orchestrator | 2026-03-29 00:58:01 | INFO  | Task 1f78857e-e9f0-4ef0-b3d3-37a20238e210 is in state STARTED 2026-03-29 00:58:01.147040 | orchestrator | 2026-03-29 00:58:01.147110 | orchestrator | 2026-03-29 00:58:01 | INFO  | Task 016221e2-dc07-4c3c-86a3-db4b32d3328c is in state SUCCESS 2026-03-29 00:58:01.147205 | orchestrator | 2026-03-29 00:58:01 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:01.149344 | orchestrator | 2026-03-29 00:58:01.149395 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-29 00:58:01.149407 | orchestrator | 2026-03-29 00:58:01.149427 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-29 00:58:01.149433 | orchestrator | Sunday 29 March 2026 00:55:07 +0000 (0:00:00.088) 0:00:00.088 ********** 2026-03-29 00:58:01.149440 | orchestrator | ok: [localhost] => { 2026-03-29 00:58:01.149448 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-03-29 00:58:01.149454 | orchestrator | } 2026-03-29 00:58:01.149460 | orchestrator | 2026-03-29 00:58:01.149466 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-29 00:58:01.149472 | orchestrator | Sunday 29 March 2026 00:55:07 +0000 (0:00:00.047) 0:00:00.135 ********** 2026-03-29 00:58:01.149478 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-29 00:58:01.149485 | orchestrator | ...ignoring 2026-03-29 00:58:01.149492 | orchestrator | 2026-03-29 00:58:01.149498 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-29 00:58:01.149505 | orchestrator | Sunday 29 March 2026 00:55:10 +0000 (0:00:02.865) 0:00:03.001 ********** 2026-03-29 00:58:01.149588 | orchestrator | skipping: [localhost] 2026-03-29 00:58:01.149592 | orchestrator | 2026-03-29 00:58:01.149596 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-29 00:58:01.149600 | orchestrator | Sunday 29 March 2026 00:55:10 +0000 (0:00:00.059) 0:00:03.061 ********** 2026-03-29 00:58:01.149604 | orchestrator | ok: [localhost] 2026-03-29 00:58:01.149608 | orchestrator | 2026-03-29 00:58:01.149612 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 00:58:01.149617 | orchestrator | 2026-03-29 00:58:01.149623 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 00:58:01.149629 | orchestrator | Sunday 29 March 2026 00:55:10 +0000 (0:00:00.212) 0:00:03.274 ********** 2026-03-29 00:58:01.149639 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:58:01.149645 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:58:01.149651 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:58:01.149657 | orchestrator | 2026-03-29 00:58:01.149663 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 00:58:01.149669 | orchestrator | Sunday 29 March 2026 00:55:10 +0000 (0:00:00.261) 0:00:03.536 ********** 2026-03-29 00:58:01.149676 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-29 00:58:01.149683 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-03-29 00:58:01.149689 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-29 00:58:01.149844 | orchestrator | 2026-03-29 00:58:01.149854 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-29 00:58:01.149858 | orchestrator | 2026-03-29 00:58:01.149862 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-29 00:58:01.149866 | orchestrator | Sunday 29 March 2026 00:55:11 +0000 (0:00:00.383) 0:00:03.919 ********** 2026-03-29 00:58:01.149870 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 00:58:01.149875 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-29 00:58:01.149879 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-29 00:58:01.149882 | orchestrator | 2026-03-29 00:58:01.149886 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-29 00:58:01.149904 | orchestrator | Sunday 29 March 2026 00:55:11 +0000 (0:00:00.418) 0:00:04.338 ********** 2026-03-29 00:58:01.149908 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:58:01.149912 | orchestrator | 2026-03-29 00:58:01.149916 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-29 00:58:01.149920 | orchestrator | Sunday 29 March 2026 00:55:12 +0000 (0:00:00.617) 0:00:04.955 ********** 2026-03-29 00:58:01.149942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 00:58:01.149949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 00:58:01.149959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 00:58:01.149964 | orchestrator | 2026-03-29 00:58:01.149972 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-29 00:58:01.149976 | orchestrator | Sunday 29 March 2026 00:55:15 +0000 (0:00:03.184) 0:00:08.140 ********** 2026-03-29 00:58:01.149982 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:58:01.149987 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:58:01.149991 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:58:01.149994 | orchestrator | 2026-03-29 00:58:01.149998 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-29 00:58:01.150002 | orchestrator | Sunday 29 March 2026 00:55:16 +0000 (0:00:00.693) 0:00:08.833 ********** 2026-03-29 00:58:01.150006 | orchestrator | skipping: [testbed-node-1] 2026-03-29 
00:58:01.150009 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:58:01.150037 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:58:01.150041 | orchestrator | 2026-03-29 00:58:01.150045 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-29 00:58:01.150049 | orchestrator | Sunday 29 March 2026 00:55:17 +0000 (0:00:01.361) 0:00:10.195 ********** 2026-03-29 00:58:01.150054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 00:58:01.150069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 00:58:01.150073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 
00:58:01.150082 | orchestrator | 2026-03-29 00:58:01.150086 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-29 00:58:01.150089 | orchestrator | Sunday 29 March 2026 00:55:21 +0000 (0:00:03.820) 0:00:14.015 ********** 2026-03-29 00:58:01.150093 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:58:01.150097 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:58:01.150101 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:58:01.150104 | orchestrator | 2026-03-29 00:58:01.150108 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-29 00:58:01.150112 | orchestrator | Sunday 29 March 2026 00:55:22 +0000 (0:00:01.167) 0:00:15.182 ********** 2026-03-29 00:58:01.150116 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:58:01.150120 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:58:01.150123 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:58:01.150127 | orchestrator | 2026-03-29 00:58:01.150131 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-29 00:58:01.150135 | orchestrator | Sunday 29 March 2026 00:55:25 +0000 (0:00:03.352) 0:00:18.535 ********** 2026-03-29 00:58:01.150139 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:58:01.150142 | orchestrator | 2026-03-29 00:58:01.150146 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-29 00:58:01.150150 | orchestrator | Sunday 29 March 2026 00:55:26 +0000 (0:00:00.473) 0:00:19.009 ********** 2026-03-29 00:58:01.150160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-29 00:58:01.150165 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:58:01.150176 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:58:01.150190 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:58:01.150193 | orchestrator |
2026-03-29 00:58:01.150197 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-29 00:58:01.150201 | orchestrator | Sunday 29 March 2026 00:55:29 +0000 (0:00:02.853) 0:00:21.862 **********
2026-03-29 00:58:01.150212 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:58:01.150226 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:58:01.150240 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:58:01.150243 | orchestrator |
2026-03-29 00:58:01.150247 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-03-29 00:58:01.150251 | orchestrator | Sunday 29 March 2026 00:55:31 +0000 (0:00:02.496) 0:00:24.359 **********
2026-03-29 00:58:01.150259 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:58:01.150277 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:58:01.150285 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:58:01.150289 | orchestrator |
2026-03-29 00:58:01.150292 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2026-03-29 00:58:01.150296 | orchestrator | Sunday 29 March 2026 00:55:34 +0000
(0:00:03.127) 0:00:27.487 **********
2026-03-29 00:58:01.150306 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:58:01.150313 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:58:01.150323 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:58:01.150331 | orchestrator |
2026-03-29 00:58:01.150335 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-03-29 00:58:01.150339 | orchestrator | Sunday 29 March 2026 00:55:38 +0000 (0:00:03.140) 0:00:30.627 **********
2026-03-29 00:58:01.150343 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:58:01.150346 | orchestrator |
changed: [testbed-node-1]
2026-03-29 00:58:01.150350 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:58:01.150354 | orchestrator |
2026-03-29 00:58:01.150358 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-29 00:58:01.150361 | orchestrator | Sunday 29 March 2026 00:55:38 +0000 (0:00:00.872) 0:00:31.500 **********
2026-03-29 00:58:01.150365 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:58:01.150369 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:58:01.150373 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:58:01.150376 | orchestrator |
2026-03-29 00:58:01.150380 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-29 00:58:01.150384 | orchestrator | Sunday 29 March 2026 00:55:39 +0000 (0:00:00.349) 0:00:31.849 **********
2026-03-29 00:58:01.150387 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:58:01.150391 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:58:01.150395 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:58:01.150399 | orchestrator |
2026-03-29 00:58:01.150402 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-29 00:58:01.150406 | orchestrator | Sunday 29 March 2026 00:55:39 +0000 (0:00:00.429) 0:00:32.279 **********
2026-03-29 00:58:01.150411 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-03-29 00:58:01.150415 | orchestrator | ...ignoring
2026-03-29 00:58:01.150419 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-03-29 00:58:01.150423 | orchestrator | ...ignoring
2026-03-29 00:58:01.150426 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-03-29 00:58:01.150430 | orchestrator | ...ignoring
2026-03-29 00:58:01.150434 | orchestrator |
2026-03-29 00:58:01.150438 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-29 00:58:01.150441 | orchestrator | Sunday 29 March 2026 00:55:51 +0000 (0:00:11.459) 0:00:43.738 **********
2026-03-29 00:58:01.150445 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:58:01.150449 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:58:01.150453 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:58:01.150456 | orchestrator |
2026-03-29 00:58:01.150460 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-29 00:58:01.150465 | orchestrator | Sunday 29 March 2026 00:55:51 +0000 (0:00:00.448) 0:00:44.187 **********
2026-03-29 00:58:01.150470 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:58:01.150474 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:58:01.150482 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:58:01.150486 | orchestrator |
2026-03-29 00:58:01.150490 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-29 00:58:01.150494 | orchestrator | Sunday 29 March 2026 00:55:51 +0000 (0:00:00.453) 0:00:44.603 **********
2026-03-29 00:58:01.150499 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:58:01.150503 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:58:01.150522 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:58:01.150527 | orchestrator |
2026-03-29 00:58:01.150531 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-29 00:58:01.150536 | orchestrator | Sunday 29 March 2026 00:55:52 +0000 (0:00:00.644) 0:00:45.056 **********
2026-03-29 00:58:01.150540 | orchestrator | skipping:
[testbed-node-0]
2026-03-29 00:58:01.150545 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:58:01.150549 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:58:01.150553 | orchestrator |
2026-03-29 00:58:01.150557 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-29 00:58:01.150562 | orchestrator | Sunday 29 March 2026 00:55:53 +0000 (0:00:00.402) 0:00:45.700 **********
2026-03-29 00:58:01.150566 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:58:01.150570 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:58:01.150574 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:58:01.150579 | orchestrator |
2026-03-29 00:58:01.150583 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-29 00:58:01.150588 | orchestrator | Sunday 29 March 2026 00:55:53 +0000 (0:00:00.402) 0:00:46.103 **********
2026-03-29 00:58:01.150594 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:58:01.150598 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:58:01.150603 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:58:01.150608 | orchestrator |
2026-03-29 00:58:01.150615 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-29 00:58:01.150622 | orchestrator | Sunday 29 March 2026 00:55:53 +0000 (0:00:00.447) 0:00:46.551 **********
2026-03-29 00:58:01.150628 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:58:01.150634 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:58:01.150640 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-03-29 00:58:01.150646 | orchestrator |
2026-03-29 00:58:01.150652 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-03-29 00:58:01.150658 | orchestrator | Sunday 29 March 2026 00:55:54 +0000 (0:00:00.376) 0:00:46.927 **********
2026-03-29 00:58:01.150664 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:58:01.150670 | orchestrator |
2026-03-29 00:58:01.150679 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-03-29 00:58:01.150686 | orchestrator | Sunday 29 March 2026 00:56:04 +0000 (0:00:09.816) 0:00:56.744 **********
2026-03-29 00:58:01.150692 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:58:01.150698 | orchestrator |
2026-03-29 00:58:01.150704 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-29 00:58:01.150710 | orchestrator | Sunday 29 March 2026 00:56:04 +0000 (0:00:00.343) 0:00:57.087 **********
2026-03-29 00:58:01.150716 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:58:01.150722 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:58:01.150727 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:58:01.150733 | orchestrator |
2026-03-29 00:58:01.150739 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-03-29 00:58:01.150745 | orchestrator | Sunday 29 March 2026 00:56:05 +0000 (0:00:00.834) 0:00:57.921 **********
2026-03-29 00:58:01.150751 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:58:01.150757 | orchestrator |
2026-03-29 00:58:01.150761 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-03-29 00:58:01.150764 | orchestrator | Sunday 29 March 2026 00:56:12 +0000 (0:00:06.817) 0:01:04.738 **********
2026-03-29 00:58:01.150768 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:58:01.150772 | orchestrator |
2026-03-29 00:58:01.150780 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-03-29 00:58:01.150784 | orchestrator | Sunday 29 March 2026 00:56:13 +0000 (0:00:01.516) 0:01:06.255 **********
2026-03-29 00:58:01.150787 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:58:01.150791 | orchestrator |
2026-03-29 00:58:01.150795 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-03-29 00:58:01.150799 | orchestrator | Sunday 29 March 2026 00:56:16 +0000 (0:00:02.441) 0:01:08.696 **********
2026-03-29 00:58:01.150802 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:58:01.150806 | orchestrator |
2026-03-29 00:58:01.150810 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-03-29 00:58:01.150814 | orchestrator | Sunday 29 March 2026 00:56:16 +0000 (0:00:00.144) 0:01:08.841 **********
2026-03-29 00:58:01.150817 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:58:01.150821 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:58:01.150825 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:58:01.150829 | orchestrator |
2026-03-29 00:58:01.150832 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-03-29 00:58:01.150836 | orchestrator | Sunday 29 March 2026 00:56:16 +0000 (0:00:00.299) 0:01:09.141 **********
2026-03-29 00:58:01.150840 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:58:01.150843 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:58:01.150847 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:58:01.150851 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-29 00:58:01.150855 | orchestrator |
2026-03-29 00:58:01.150860 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-29 00:58:01.150869 | orchestrator | skipping: no hosts matched
2026-03-29 00:58:01.150877 | orchestrator |
2026-03-29 00:58:01.150883 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-29 00:58:01.150889 | orchestrator |
2026-03-29 00:58:01.150895 | orchestrator | TASK [mariadb : Restart MariaDB container]
*************************************
2026-03-29 00:58:01.150900 | orchestrator | Sunday 29 March 2026 00:56:16 +0000 (0:00:00.340) 0:01:09.481 **********
2026-03-29 00:58:01.150906 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:58:01.150911 | orchestrator |
2026-03-29 00:58:01.150918 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-29 00:58:01.150924 | orchestrator | Sunday 29 March 2026 00:56:31 +0000 (0:00:14.662) 0:01:24.144 **********
2026-03-29 00:58:01.150931 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:58:01.150937 | orchestrator |
2026-03-29 00:58:01.150943 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-29 00:58:01.150947 | orchestrator | Sunday 29 March 2026 00:56:47 +0000 (0:00:15.565) 0:01:39.710 **********
2026-03-29 00:58:01.150951 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:58:01.150955 | orchestrator |
2026-03-29 00:58:01.150959 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-29 00:58:01.150962 | orchestrator |
2026-03-29 00:58:01.150966 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-29 00:58:01.150970 | orchestrator | Sunday 29 March 2026 00:56:49 +0000 (0:00:02.432) 0:01:42.142 **********
2026-03-29 00:58:01.150973 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:58:01.150977 | orchestrator |
2026-03-29 00:58:01.150981 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-29 00:58:01.150985 | orchestrator | Sunday 29 March 2026 00:57:11 +0000 (0:00:21.694) 0:02:03.837 **********
2026-03-29 00:58:01.150988 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:58:01.150992 | orchestrator |
2026-03-29 00:58:01.150996 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-29 00:58:01.150999 | orchestrator | Sunday 29 March 2026 00:57:21 +0000 (0:00:09.882) 0:02:13.720 **********
2026-03-29 00:58:01.151003 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:58:01.151007 | orchestrator |
2026-03-29 00:58:01.151011 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-29 00:58:01.151019 | orchestrator |
2026-03-29 00:58:01.151027 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-29 00:58:01.151035 | orchestrator | Sunday 29 March 2026 00:57:23 +0000 (0:00:02.405) 0:02:16.125 **********
2026-03-29 00:58:01.151038 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:58:01.151042 | orchestrator |
2026-03-29 00:58:01.151046 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-29 00:58:01.151050 | orchestrator | Sunday 29 March 2026 00:57:39 +0000 (0:00:16.377) 0:02:32.502 **********
2026-03-29 00:58:01.151054 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:58:01.151057 | orchestrator |
2026-03-29 00:58:01.151061 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-29 00:58:01.151065 | orchestrator | Sunday 29 March 2026 00:57:40 +0000 (0:00:00.739) 0:02:33.242 **********
2026-03-29 00:58:01.151069 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:58:01.151072 | orchestrator |
2026-03-29 00:58:01.151076 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-29 00:58:01.151080 | orchestrator |
2026-03-29 00:58:01.151084 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-29 00:58:01.151087 | orchestrator | Sunday 29 March 2026 00:57:43 +0000 (0:00:02.763) 0:02:36.006 **********
2026-03-29 00:58:01.151091 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:58:01.151095 | orchestrator |
2026-03-29 00:58:01.151099 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-29 00:58:01.151103 | orchestrator | Sunday 29 March 2026 00:57:44 +0000 (0:00:00.739) 0:02:36.746 ********** 2026-03-29 00:58:01.151106 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:58:01.151110 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:58:01.151114 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:58:01.151118 | orchestrator | 2026-03-29 00:58:01.151122 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-29 00:58:01.151125 | orchestrator | Sunday 29 March 2026 00:57:46 +0000 (0:00:02.851) 0:02:39.597 ********** 2026-03-29 00:58:01.151129 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:58:01.151133 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:58:01.151137 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:58:01.151140 | orchestrator | 2026-03-29 00:58:01.151144 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-29 00:58:01.151148 | orchestrator | Sunday 29 March 2026 00:57:49 +0000 (0:00:02.654) 0:02:42.252 ********** 2026-03-29 00:58:01.151152 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:58:01.151155 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:58:01.151159 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:58:01.151163 | orchestrator | 2026-03-29 00:58:01.151167 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-29 00:58:01.151171 | orchestrator | Sunday 29 March 2026 00:57:52 +0000 (0:00:02.530) 0:02:44.783 ********** 2026-03-29 00:58:01.151175 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:58:01.151181 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:58:01.151187 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:58:01.151193 | orchestrator | 
2026-03-29 00:58:01.151200 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-29 00:58:01.151205 | orchestrator | Sunday 29 March 2026 00:57:54 +0000 (0:00:02.538) 0:02:47.322 ********** 2026-03-29 00:58:01.151211 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:58:01.151217 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:58:01.151223 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:58:01.151228 | orchestrator | 2026-03-29 00:58:01.151234 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-29 00:58:01.151239 | orchestrator | Sunday 29 March 2026 00:57:58 +0000 (0:00:03.606) 0:02:50.929 ********** 2026-03-29 00:58:01.151245 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:58:01.151251 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:58:01.151262 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:58:01.151268 | orchestrator | 2026-03-29 00:58:01.151275 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:58:01.151282 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-29 00:58:01.151288 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-29 00:58:01.151296 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-29 00:58:01.151302 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-29 00:58:01.151308 | orchestrator | 2026-03-29 00:58:01.151315 | orchestrator | 2026-03-29 00:58:01.151322 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:58:01.151329 | orchestrator | Sunday 29 March 2026 00:57:58 +0000 (0:00:00.223) 0:02:51.152 ********** 2026-03-29 00:58:01.151336 | 
orchestrator | =============================================================================== 2026-03-29 00:58:01.151343 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 36.36s 2026-03-29 00:58:01.151350 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 25.45s 2026-03-29 00:58:01.151356 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.38s 2026-03-29 00:58:01.151363 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.46s 2026-03-29 00:58:01.151369 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.82s 2026-03-29 00:58:01.151376 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 6.82s 2026-03-29 00:58:01.151386 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.84s 2026-03-29 00:58:01.151392 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.82s 2026-03-29 00:58:01.151402 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.61s 2026-03-29 00:58:01.151408 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.35s 2026-03-29 00:58:01.151414 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.18s 2026-03-29 00:58:01.151420 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.14s 2026-03-29 00:58:01.151426 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.13s 2026-03-29 00:58:01.151432 | orchestrator | Check MariaDB service --------------------------------------------------- 2.87s 2026-03-29 00:58:01.151438 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.85s 2026-03-29 00:58:01.151444 | orchestrator | 
mariadb : Creating shard root mysql user -------------------------------- 2.85s 2026-03-29 00:58:01.151450 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.76s 2026-03-29 00:58:01.151457 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.66s 2026-03-29 00:58:01.151463 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.54s 2026-03-29 00:58:01.151467 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.53s 2026-03-29 00:58:04.187764 | orchestrator | 2026-03-29 00:58:04 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:58:04.189109 | orchestrator | 2026-03-29 00:58:04 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:58:04.190615 | orchestrator | 2026-03-29 00:58:04 | INFO  | Task 1f78857e-e9f0-4ef0-b3d3-37a20238e210 is in state STARTED 2026-03-29 00:58:04.190799 | orchestrator | 2026-03-29 00:58:04 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:07.229973 | orchestrator | 2026-03-29 00:58:07 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:58:07.231209 | orchestrator | 2026-03-29 00:58:07 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:58:07.231906 | orchestrator | 2026-03-29 00:58:07 | INFO  | Task 1f78857e-e9f0-4ef0-b3d3-37a20238e210 is in state STARTED 2026-03-29 00:58:07.231937 | orchestrator | 2026-03-29 00:58:07 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:10.279776 | orchestrator | 2026-03-29 00:58:10 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:58:10.281356 | orchestrator | 2026-03-29 00:58:10 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:58:10.286096 | orchestrator | 2026-03-29 00:58:10 | INFO  | Task 
1f78857e-e9f0-4ef0-b3d3-37a20238e210 is in state STARTED 2026-03-29 00:58:10.286157 | orchestrator | 2026-03-29 00:58:10 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:34.661402 | orchestrator | 2026-03-29 00:58:34 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:58:34.662160 | orchestrator | 
2026-03-29 00:58:34 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:58:34.663635 | orchestrator | 2026-03-29 00:58:34 | INFO  | Task 1f78857e-e9f0-4ef0-b3d3-37a20238e210 is in state STARTED 2026-03-29 00:58:34.663689 | orchestrator | 2026-03-29 00:58:34 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:37.709430 | orchestrator | 2026-03-29 00:58:37 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:58:37.712231 | orchestrator | 2026-03-29 00:58:37 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:58:37.714166 | orchestrator | 2026-03-29 00:58:37 | INFO  | Task 1f78857e-e9f0-4ef0-b3d3-37a20238e210 is in state STARTED 2026-03-29 00:58:37.714219 | orchestrator | 2026-03-29 00:58:37 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:40.750345 | orchestrator | 2026-03-29 00:58:40 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:58:40.751972 | orchestrator | 2026-03-29 00:58:40 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:58:40.753239 | orchestrator | 2026-03-29 00:58:40 | INFO  | Task 1f78857e-e9f0-4ef0-b3d3-37a20238e210 is in state STARTED 2026-03-29 00:58:40.753258 | orchestrator | 2026-03-29 00:58:40 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:43.805358 | orchestrator | 2026-03-29 00:58:43 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:58:43.807569 | orchestrator | 2026-03-29 00:58:43 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:58:43.809150 | orchestrator | 2026-03-29 00:58:43 | INFO  | Task 58128f79-0730-49f6-8b80-36a7f0684464 is in state STARTED 2026-03-29 00:58:43.814206 | orchestrator | 2026-03-29 00:58:43 | INFO  | Task 1f78857e-e9f0-4ef0-b3d3-37a20238e210 is in state SUCCESS 2026-03-29 00:58:43.815757 | orchestrator | 2026-03-29 00:58:43.815783 | 
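The osism wrapper above polls its task IDs once per second ("Wait 1 second(s) until the next check") until each task leaves the STARTED state, as task 1f78857e… does when it reaches SUCCESS. The loop can be sketched like this; `get_state` is a hypothetical stand-in for whatever client call the real tool uses, so treat the whole shape as an assumption rather than the osism implementation:

```python
import time
from typing import Callable, Iterable

def wait_for_tasks(task_ids: Iterable[str],
                   get_state: Callable[[str], str],
                   interval: float = 1.0,
                   timeout: float = 3600.0) -> dict[str, str]:
    """Poll task states until none are PENDING/STARTED; return final states."""
    pending = set(task_ids)
    states: dict[str, str] = {}
    deadline = time.monotonic() + timeout
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            states[task_id] = state
            if state not in ("PENDING", "STARTED"):
                pending.discard(task_id)  # terminal: SUCCESS, FAILURE, ...
        if pending:
            time.sleep(interval)  # "Wait 1 second(s) until the next check"
    return states
```

Note the log keeps polling the remaining STARTED tasks after one finishes, which matches treating each task independently as above.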
orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-29 00:58:43.815787 | orchestrator | 2.16.14 2026-03-29 00:58:43.815791 | orchestrator | 2026-03-29 00:58:43.815794 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-29 00:58:43.815798 | orchestrator | 2026-03-29 00:58:43.815801 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-29 00:58:43.815805 | orchestrator | Sunday 29 March 2026 00:56:52 +0000 (0:00:00.403) 0:00:00.403 ********** 2026-03-29 00:58:43.815809 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:58:43.815824 | orchestrator | 2026-03-29 00:58:43.815827 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-29 00:58:43.815830 | orchestrator | Sunday 29 March 2026 00:56:52 +0000 (0:00:00.441) 0:00:00.845 ********** 2026-03-29 00:58:43.815833 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:58:43.815837 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:58:43.815840 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:58:43.815843 | orchestrator | 2026-03-29 00:58:43.815846 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-29 00:58:43.815856 | orchestrator | Sunday 29 March 2026 00:56:53 +0000 (0:00:00.943) 0:00:01.789 ********** 2026-03-29 00:58:43.815859 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:58:43.815862 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:58:43.815865 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:58:43.815868 | orchestrator | 2026-03-29 00:58:43.815871 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-29 00:58:43.815874 | orchestrator | Sunday 29 March 2026 00:56:53 +0000 (0:00:00.220) 0:00:02.009 ********** 
2026-03-29 00:58:43.815877 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:58:43.815880 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:58:43.815883 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:58:43.815886 | orchestrator | 2026-03-29 00:58:43.815889 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-29 00:58:43.815892 | orchestrator | Sunday 29 March 2026 00:56:54 +0000 (0:00:00.682) 0:00:02.692 ********** 2026-03-29 00:58:43.815895 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:58:43.815898 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:58:43.815902 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:58:43.815905 | orchestrator | 2026-03-29 00:58:43.815908 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-29 00:58:43.815911 | orchestrator | Sunday 29 March 2026 00:56:54 +0000 (0:00:00.288) 0:00:02.980 ********** 2026-03-29 00:58:43.815914 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:58:43.815917 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:58:43.815920 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:58:43.815923 | orchestrator | 2026-03-29 00:58:43.815926 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-29 00:58:43.815929 | orchestrator | Sunday 29 March 2026 00:56:54 +0000 (0:00:00.265) 0:00:03.246 ********** 2026-03-29 00:58:43.815932 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:58:43.815935 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:58:43.815938 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:58:43.815941 | orchestrator | 2026-03-29 00:58:43.815944 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-29 00:58:43.815947 | orchestrator | Sunday 29 March 2026 00:56:55 +0000 (0:00:00.297) 0:00:03.544 ********** 2026-03-29 00:58:43.815950 | orchestrator | skipping: [testbed-node-3] 
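The `Check if podman binary is present` / `Set_fact container_binary` pair above establishes which container engine later tasks should shell out to. The selection reduces to a PATH lookup with a preference order; this is a sketch of that pattern (the function name and the podman-first ordering are assumptions, not lifted from ceph-ansible):

```python
import shutil
from typing import Callable, Optional

def pick_container_binary(
    which: Callable[[str], Optional[str]] = shutil.which,
) -> str:
    """Return 'podman' if installed, else 'docker'; raise if neither exists."""
    for binary in ("podman", "docker"):
        if which(binary):  # PATH lookup, like `command -v`
            return binary
    raise RuntimeError("neither podman nor docker found on PATH")
```

Injecting `which` keeps the function testable without touching the real PATH; in this testbed run the fact resolves to docker, as the later `docker ps` commands show.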
2026-03-29 00:58:43.815954 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.815957 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:58:43.815960 | orchestrator | 2026-03-29 00:58:43.815963 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-29 00:58:43.815966 | orchestrator | Sunday 29 March 2026 00:56:55 +0000 (0:00:00.394) 0:00:03.938 ********** 2026-03-29 00:58:43.815970 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:58:43.815973 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:58:43.815976 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:58:43.815979 | orchestrator | 2026-03-29 00:58:43.815982 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-29 00:58:43.815985 | orchestrator | Sunday 29 March 2026 00:56:55 +0000 (0:00:00.258) 0:00:04.196 ********** 2026-03-29 00:58:43.815988 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-29 00:58:43.815991 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 00:58:43.815994 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 00:58:43.816000 | orchestrator | 2026-03-29 00:58:43.816003 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-29 00:58:43.816030 | orchestrator | Sunday 29 March 2026 00:56:56 +0000 (0:00:00.628) 0:00:04.825 ********** 2026-03-29 00:58:43.816033 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:58:43.816036 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:58:43.816039 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:58:43.816042 | orchestrator | 2026-03-29 00:58:43.816045 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-29 00:58:43.816048 | orchestrator | Sunday 29 March 2026 
00:56:56 +0000 (0:00:00.415) 0:00:05.240 ********** 2026-03-29 00:58:43.816085 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-29 00:58:43.816088 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 00:58:43.816091 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 00:58:43.816094 | orchestrator | 2026-03-29 00:58:43.816097 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-29 00:58:43.816100 | orchestrator | Sunday 29 March 2026 00:56:59 +0000 (0:00:02.910) 0:00:08.150 ********** 2026-03-29 00:58:43.816103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-29 00:58:43.816106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-29 00:58:43.816110 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-29 00:58:43.816113 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.816116 | orchestrator | 2026-03-29 00:58:43.816145 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-29 00:58:43.816150 | orchestrator | Sunday 29 March 2026 00:57:00 +0000 (0:00:00.359) 0:00:08.510 ********** 2026-03-29 00:58:43.816154 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-29 00:58:43.816223 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-29 00:58:43.816229 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-29 00:58:43.816232 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.816236 | orchestrator | 2026-03-29 00:58:43.816241 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-29 00:58:43.816246 | orchestrator | Sunday 29 March 2026 00:57:00 +0000 (0:00:00.686) 0:00:09.197 ********** 2026-03-29 00:58:43.816252 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 00:58:43.816259 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 00:58:43.816264 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 00:58:43.816273 | orchestrator | skipping: [testbed-node-3] 
2026-03-29 00:58:43.816278 | orchestrator | 2026-03-29 00:58:43.816284 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-29 00:58:43.816289 | orchestrator | Sunday 29 March 2026 00:57:00 +0000 (0:00:00.140) 0:00:09.337 ********** 2026-03-29 00:58:43.816295 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c5e500ac787e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-29 00:56:57.814942', 'end': '2026-03-29 00:56:57.860264', 'delta': '0:00:00.045322', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c5e500ac787e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-29 00:58:43.816303 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fc9de10043cf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-29 00:56:58.859912', 'end': '2026-03-29 00:56:58.896208', 'delta': '0:00:00.036296', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc9de10043cf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-29 00:58:43.816312 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9fd92284b6a7', 'stderr': 
'', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-29 00:56:59.612091', 'end': '2026-03-29 00:56:59.651949', 'delta': '0:00:00.039858', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9fd92284b6a7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-29 00:58:43.816318 | orchestrator | 2026-03-29 00:58:43.816323 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-29 00:58:43.816332 | orchestrator | Sunday 29 March 2026 00:57:01 +0000 (0:00:00.277) 0:00:09.614 ********** 2026-03-29 00:58:43.816335 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:58:43.816338 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:58:43.816341 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:58:43.816344 | orchestrator | 2026-03-29 00:58:43.816347 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-29 00:58:43.816350 | orchestrator | Sunday 29 March 2026 00:57:01 +0000 (0:00:00.377) 0:00:09.992 ********** 2026-03-29 00:58:43.816354 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-29 00:58:43.816357 | orchestrator | 2026-03-29 00:58:43.816360 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-29 00:58:43.816363 | orchestrator | Sunday 29 March 2026 00:57:02 +0000 (0:00:01.082) 0:00:11.074 ********** 2026-03-29 00:58:43.816366 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.816369 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.816375 | orchestrator | 
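The `Find a running mon container` loop above runs `docker ps -q --filter name=ceph-mon-<host>` against each monitor and registers one result dict per host; `running_mon` is then the first host whose command returned a container ID. That selection can be sketched as a pure function over result dicts shaped like the registers shown in this log (the function itself is an illustrative assumption):

```python
from typing import Optional

def pick_running_mon(results: list[dict]) -> Optional[tuple[str, str]]:
    """Return (hostname, container_id) for the first mon with a live container."""
    for res in results:
        lines = res.get("stdout_lines") or []
        if res.get("rc") == 0 and lines:
            # 'item' carries the loop hostname in the registered result
            return res["item"], lines[0]
    return None
```

In the run above all three monitors report a container ID, so the first entry (testbed-node-0, c5e500ac787e) would win.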
skipping: [testbed-node-5] 2026-03-29 00:58:43.816378 | orchestrator | 2026-03-29 00:58:43.816381 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-29 00:58:43.816384 | orchestrator | Sunday 29 March 2026 00:57:02 +0000 (0:00:00.252) 0:00:11.326 ********** 2026-03-29 00:58:43.816387 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.816390 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.816393 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:58:43.816396 | orchestrator | 2026-03-29 00:58:43.816399 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-29 00:58:43.816402 | orchestrator | Sunday 29 March 2026 00:57:03 +0000 (0:00:00.369) 0:00:11.695 ********** 2026-03-29 00:58:43.816405 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.816408 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.816411 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:58:43.816414 | orchestrator | 2026-03-29 00:58:43.816417 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-29 00:58:43.816420 | orchestrator | Sunday 29 March 2026 00:57:03 +0000 (0:00:00.469) 0:00:12.165 ********** 2026-03-29 00:58:43.816423 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:58:43.816427 | orchestrator | 2026-03-29 00:58:43.816430 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-29 00:58:43.816433 | orchestrator | Sunday 29 March 2026 00:57:03 +0000 (0:00:00.148) 0:00:12.313 ********** 2026-03-29 00:58:43.816436 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.816439 | orchestrator | 2026-03-29 00:58:43.816442 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-29 00:58:43.816445 | orchestrator | Sunday 29 March 2026 00:57:04 +0000 (0:00:00.228) 
0:00:12.543 ********** 2026-03-29 00:58:43.816448 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.816451 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.816454 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:58:43.816457 | orchestrator | 2026-03-29 00:58:43.816460 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-29 00:58:43.816463 | orchestrator | Sunday 29 March 2026 00:57:04 +0000 (0:00:00.267) 0:00:12.810 ********** 2026-03-29 00:58:43.816466 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.816469 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.816472 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:58:43.816475 | orchestrator | 2026-03-29 00:58:43.816490 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-29 00:58:43.816496 | orchestrator | Sunday 29 March 2026 00:57:04 +0000 (0:00:00.307) 0:00:13.118 ********** 2026-03-29 00:58:43.816500 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.816504 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.816509 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:58:43.816513 | orchestrator | 2026-03-29 00:58:43.816519 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-29 00:58:43.816524 | orchestrator | Sunday 29 March 2026 00:57:05 +0000 (0:00:00.528) 0:00:13.647 ********** 2026-03-29 00:58:43.816529 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.816534 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.816539 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:58:43.816543 | orchestrator | 2026-03-29 00:58:43.816546 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-29 00:58:43.816549 | orchestrator | Sunday 29 March 2026 00:57:05 +0000 (0:00:00.318) 
0:00:13.965 ********** 2026-03-29 00:58:43.816552 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.816557 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.816562 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:58:43.816567 | orchestrator | 2026-03-29 00:58:43.816572 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-29 00:58:43.816577 | orchestrator | Sunday 29 March 2026 00:57:05 +0000 (0:00:00.324) 0:00:14.290 ********** 2026-03-29 00:58:43.816585 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.816590 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.816595 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:58:43.816603 | orchestrator | 2026-03-29 00:58:43.816608 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-29 00:58:43.816614 | orchestrator | Sunday 29 March 2026 00:57:06 +0000 (0:00:00.334) 0:00:14.625 ********** 2026-03-29 00:58:43.816619 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.816624 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.816629 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:58:43.816634 | orchestrator | 2026-03-29 00:58:43.816640 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-29 00:58:43.816644 | orchestrator | Sunday 29 March 2026 00:57:06 +0000 (0:00:00.613) 0:00:15.238 ********** 2026-03-29 00:58:43.816652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cb4f0063--6caa--55a9--9ed6--73f648958ae5-osd--block--cb4f0063--6caa--55a9--9ed6--73f648958ae5', 'dm-uuid-LVM-18FbCbvoBegBDNziKS3a5CeZ2dFoK2wu0N0E07gwNCXzlSASmyYj5WPMEdm7tBUd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9db53e8f--4e16--545c--9934--db4b909c3b32-osd--block--9db53e8f--4e16--545c--9934--db4b909c3b32', 'dm-uuid-LVM-dFKc45nUf5iLu79iHhJ43d7H348x9NFjq3sa4hhA7pTFRvreSAL7kYcXRjShn3hY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816698 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-29 00:58:43.816712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253', 'scsi-SQEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b7cca30-4cc6-46c3-a861-4239a25d0253-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:58:43.816721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--cb4f0063--6caa--55a9--9ed6--73f648958ae5-osd--block--cb4f0063--6caa--55a9--9ed6--73f648958ae5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-09eM06-PD0h-wxVC-7dOo-u1c0-fl3j-382s19', 'scsi-0QEMU_QEMU_HARDDISK_cf707a58-c66d-4c72-840a-e00f4b50b6ac', 'scsi-SQEMU_QEMU_HARDDISK_cf707a58-c66d-4c72-840a-e00f4b50b6ac'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:58:43.816730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9db53e8f--4e16--545c--9934--db4b909c3b32-osd--block--9db53e8f--4e16--545c--9934--db4b909c3b32'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cSlgWj-hXCs-N7CV-oNQq-3ad2-8oJB-B66ILb', 'scsi-0QEMU_QEMU_HARDDISK_756b3521-cc64-4337-8d74-551033403337', 'scsi-SQEMU_QEMU_HARDDISK_756b3521-cc64-4337-8d74-551033403337'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:58:43.816735 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_006c3921-cee3-45d1-95d5-34c501bc63f9', 'scsi-SQEMU_QEMU_HARDDISK_006c3921-cee3-45d1-95d5-34c501bc63f9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:58:43.816738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ce40293b--1bc0--5558--a1b7--16c9a624d7c9-osd--block--ce40293b--1bc0--5558--a1b7--16c9a624d7c9', 'dm-uuid-LVM-6hbEsefhTAiYT2twgIRfBFKeXHhANdtLmZy7Xesck6f4vVy3CfM6Jyla6mlP71ci'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:58:43.816745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c9903f66--e17d--5d19--b140--42471f0a3aa8-osd--block--c9903f66--e17d--5d19--b140--42471f0a3aa8', 'dm-uuid-LVM-whOfwp51vxB6KSTsdyjJLvfitjjSuaFs23iFECqnqj2NhA3btC5jY7YWidpOeMfo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-29 00:58:43.816761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816779 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.816782 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a', 'scsi-SQEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a-part1', 'scsi-SQEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a-part14', 'scsi-SQEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 
'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a-part15', 'scsi-SQEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a-part16', 'scsi-SQEMU_QEMU_HARDDISK_331a20ca-a2d6-4acb-b247-8df95204773a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:58:43.816796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ce40293b--1bc0--5558--a1b7--16c9a624d7c9-osd--block--ce40293b--1bc0--5558--a1b7--16c9a624d7c9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1L2DIb-j926-6me5-KfCU-0DmO-6Hcl-HH1eUV', 'scsi-0QEMU_QEMU_HARDDISK_f8431fa8-afc6-4068-bff4-a67d5c0799f9', 'scsi-SQEMU_QEMU_HARDDISK_f8431fa8-afc6-4068-bff4-a67d5c0799f9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:58:43.816799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c9903f66--e17d--5d19--b140--42471f0a3aa8-osd--block--c9903f66--e17d--5d19--b140--42471f0a3aa8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9RD3yg-iigT-Wyq9-U1Cd-YSqQ-ePCr-skDTbW', 'scsi-0QEMU_QEMU_HARDDISK_08797191-4f26-4e13-8d53-ed6640c6fbd2', 'scsi-SQEMU_QEMU_HARDDISK_08797191-4f26-4e13-8d53-ed6640c6fbd2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:58:43.816803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6eff12ff-972f-42e1-84ee-23c8e4926f48', 'scsi-SQEMU_QEMU_HARDDISK_6eff12ff-972f-42e1-84ee-23c8e4926f48'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:58:43.816806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:58:43.816811 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.816814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--185c2dd0--6b1c--571f--b734--244d928106eb-osd--block--185c2dd0--6b1c--571f--b734--244d928106eb', 'dm-uuid-LVM-yt3wn1MfD3Yrl20FyTmocI3ouGdQngsND3KunRKngYF0iMv3GtbeAEjxSjIK3cWd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--18721a71--2d87--5ab0--bec8--5e03a015e695-osd--block--18721a71--2d87--5ab0--bec8--5e03a015e695', 'dm-uuid-LVM-JKvUQZO2kAxAc4jJG9NJg9LeFxFWhhcLFpEl4OB8kAUPSVlpLZb6vpxxiz3mwLsG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816823 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816847 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816850 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:58:43.816857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987', 'scsi-SQEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f3fbef7-4677-4949-824a-e0d60c532987-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:58:43.816861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--185c2dd0--6b1c--571f--b734--244d928106eb-osd--block--185c2dd0--6b1c--571f--b734--244d928106eb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8NXN2d-8oWY-zl9N-hWCw-e0nf-McpG-E2aXC2', 'scsi-0QEMU_QEMU_HARDDISK_19b179cd-386f-4584-8a4b-106e5ad8592d', 'scsi-SQEMU_QEMU_HARDDISK_19b179cd-386f-4584-8a4b-106e5ad8592d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:58:43.816867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--18721a71--2d87--5ab0--bec8--5e03a015e695-osd--block--18721a71--2d87--5ab0--bec8--5e03a015e695'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AolG9Q-XuJi-H8Ed-7JBm-KDZW-dSrk-GYSOpU', 'scsi-0QEMU_QEMU_HARDDISK_64dd44e8-56db-4990-9653-26f9a904c769', 'scsi-SQEMU_QEMU_HARDDISK_64dd44e8-56db-4990-9653-26f9a904c769'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:58:43.816871 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66d732d5-e9a7-47c2-8d7a-ba89d690a00e', 'scsi-SQEMU_QEMU_HARDDISK_66d732d5-e9a7-47c2-8d7a-ba89d690a00e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:58:43.816877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:58:43.816881 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:58:43.816885 | orchestrator | 2026-03-29 00:58:43.816888 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] ***
2026-03-29 00:58:43.816892 | orchestrator | Sunday 29 March 2026 00:57:07 +0000 (0:00:00.609)       0:00:15.847 **********
2026-03-29 00:58:43.816898 | orchestrator | skipping: [testbed-node-3] => 15 items (dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0): condition 'osd_auto_discovery | default(False) | bool' was False [per-device fact dumps elided]
2026-03-29 00:58:43.816969 | orchestrator | skipping: [testbed-node-4] => 15 items (dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0): condition 'osd_auto_discovery | default(False) | bool' was False [per-device fact dumps elided]
2026-03-29 00:58:43.817119 | orchestrator | skipping: [testbed-node-5] => 15 items (dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0): condition 'osd_auto_discovery | default(False) | bool' was False [per-device fact dumps elided]
2026-03-29 00:58:43.817046 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:58:43.817577 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:58:43.817636 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:58:43.817639 | orchestrator |
2026-03-29 00:58:43.817642 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-29 00:58:43.817646 | orchestrator | Sunday 29 March 2026 00:57:08 +0000 (0:00:00.580)       0:00:16.427 **********
2026-03-29 00:58:43.817649 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:58:43.817655 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:58:43.817658 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:58:43.817661 | orchestrator |
2026-03-29 00:58:43.817664 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-29 00:58:43.817667 | orchestrator | Sunday 29 March 2026 00:57:08 +0000 (0:00:00.660)       0:00:17.088 **********
2026-03-29 00:58:43.817672 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:58:43.817675 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:58:43.817678 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:58:43.817681 | orchestrator |
2026-03-29 00:58:43.817684 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-29 00:58:43.817687 | orchestrator | Sunday 29 March 2026 00:57:09 +0000 (0:00:00.471)       0:00:17.560 **********
2026-03-29 00:58:43.817691 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:58:43.817694 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:58:43.817697 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:58:43.817700 | orchestrator |
2026-03-29 00:58:43.817706 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-29 00:58:43.817711 | orchestrator | Sunday 29 March 2026 00:57:09 +0000 (0:00:00.631)       0:00:18.191
********** 2026-03-29 00:58:43.817715 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.817721 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.817726 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:58:43.817730 | orchestrator | 2026-03-29 00:58:43.817736 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-29 00:58:43.817741 | orchestrator | Sunday 29 March 2026 00:57:10 +0000 (0:00:00.293) 0:00:18.484 ********** 2026-03-29 00:58:43.817746 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.817751 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.817757 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:58:43.817762 | orchestrator | 2026-03-29 00:58:43.817767 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-29 00:58:43.817770 | orchestrator | Sunday 29 March 2026 00:57:10 +0000 (0:00:00.468) 0:00:18.953 ********** 2026-03-29 00:58:43.817773 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.817776 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.817780 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:58:43.817783 | orchestrator | 2026-03-29 00:58:43.817786 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-29 00:58:43.817789 | orchestrator | Sunday 29 March 2026 00:57:11 +0000 (0:00:00.502) 0:00:19.455 ********** 2026-03-29 00:58:43.817792 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-29 00:58:43.817795 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-29 00:58:43.817798 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-29 00:58:43.817801 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-29 00:58:43.817804 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-29 00:58:43.817807 | orchestrator 
| ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-29 00:58:43.817810 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-29 00:58:43.817813 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-29 00:58:43.817816 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-29 00:58:43.817819 | orchestrator | 2026-03-29 00:58:43.817823 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-29 00:58:43.817828 | orchestrator | Sunday 29 March 2026 00:57:11 +0000 (0:00:00.865) 0:00:20.320 ********** 2026-03-29 00:58:43.817835 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-29 00:58:43.817842 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-29 00:58:43.817846 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-29 00:58:43.817851 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.817856 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-29 00:58:43.817876 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-29 00:58:43.817882 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-29 00:58:43.817887 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.817892 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-29 00:58:43.817897 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-29 00:58:43.817902 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-29 00:58:43.817908 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:58:43.817913 | orchestrator | 2026-03-29 00:58:43.817918 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-29 00:58:43.817924 | orchestrator | Sunday 29 March 2026 00:57:12 +0000 (0:00:00.351) 0:00:20.672 ********** 2026-03-29 
00:58:43.817929 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:58:43.817934 | orchestrator | 2026-03-29 00:58:43.817940 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-29 00:58:43.817947 | orchestrator | Sunday 29 March 2026 00:57:13 +0000 (0:00:00.727) 0:00:21.400 ********** 2026-03-29 00:58:43.817956 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.817962 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.817967 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:58:43.817973 | orchestrator | 2026-03-29 00:58:43.817977 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-29 00:58:43.817983 | orchestrator | Sunday 29 March 2026 00:57:13 +0000 (0:00:00.333) 0:00:21.734 ********** 2026-03-29 00:58:43.817988 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.817993 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.817998 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:58:43.818003 | orchestrator | 2026-03-29 00:58:43.818009 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-29 00:58:43.818102 | orchestrator | Sunday 29 March 2026 00:57:13 +0000 (0:00:00.312) 0:00:22.046 ********** 2026-03-29 00:58:43.818109 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.818114 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.818119 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:58:43.818124 | orchestrator | 2026-03-29 00:58:43.818129 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-29 00:58:43.818135 | orchestrator | Sunday 29 March 2026 00:57:13 +0000 (0:00:00.315) 0:00:22.362 ********** 2026-03-29 
00:58:43.818140 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:58:43.818153 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:58:43.818159 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:58:43.818165 | orchestrator | 2026-03-29 00:58:43.818171 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-29 00:58:43.818176 | orchestrator | Sunday 29 March 2026 00:57:14 +0000 (0:00:00.680) 0:00:23.042 ********** 2026-03-29 00:58:43.818180 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:58:43.818184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:58:43.818188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:58:43.818191 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.818195 | orchestrator | 2026-03-29 00:58:43.818198 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-29 00:58:43.818201 | orchestrator | Sunday 29 March 2026 00:57:15 +0000 (0:00:00.390) 0:00:23.432 ********** 2026-03-29 00:58:43.818205 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:58:43.818209 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:58:43.818212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:58:43.818215 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.818219 | orchestrator | 2026-03-29 00:58:43.818222 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-29 00:58:43.818230 | orchestrator | Sunday 29 March 2026 00:57:15 +0000 (0:00:00.367) 0:00:23.800 ********** 2026-03-29 00:58:43.818233 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:58:43.818236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:58:43.818240 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:58:43.818243 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.818247 | orchestrator | 2026-03-29 00:58:43.818250 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-29 00:58:43.818254 | orchestrator | Sunday 29 March 2026 00:57:15 +0000 (0:00:00.387) 0:00:24.188 ********** 2026-03-29 00:58:43.818257 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:58:43.818261 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:58:43.818265 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:58:43.818268 | orchestrator | 2026-03-29 00:58:43.818271 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-29 00:58:43.818275 | orchestrator | Sunday 29 March 2026 00:57:16 +0000 (0:00:00.303) 0:00:24.491 ********** 2026-03-29 00:58:43.818278 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-29 00:58:43.818282 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-29 00:58:43.818285 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-29 00:58:43.818289 | orchestrator | 2026-03-29 00:58:43.818292 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-29 00:58:43.818296 | orchestrator | Sunday 29 March 2026 00:57:16 +0000 (0:00:00.554) 0:00:25.045 ********** 2026-03-29 00:58:43.818300 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-29 00:58:43.818303 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 00:58:43.818307 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 00:58:43.818310 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-29 00:58:43.818314 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-29 00:58:43.818317 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-29 00:58:43.818321 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-29 00:58:43.818324 | orchestrator | 2026-03-29 00:58:43.818327 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-29 00:58:43.818332 | orchestrator | Sunday 29 March 2026 00:57:17 +0000 (0:00:00.897) 0:00:25.943 ********** 2026-03-29 00:58:43.818335 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-29 00:58:43.818339 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 00:58:43.818342 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 00:58:43.818346 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-29 00:58:43.818349 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-29 00:58:43.818353 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-29 00:58:43.818360 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-29 00:58:43.818363 | orchestrator | 2026-03-29 00:58:43.818367 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-29 00:58:43.818371 | orchestrator | Sunday 29 March 2026 00:57:19 +0000 (0:00:01.735) 0:00:27.679 ********** 2026-03-29 00:58:43.818374 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:58:43.818378 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:58:43.818381 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-29 00:58:43.818385 | orchestrator | 2026-03-29 00:58:43.818388 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-29 00:58:43.818394 | orchestrator | Sunday 29 March 2026 00:57:19 +0000 (0:00:00.326) 0:00:28.005 ********** 2026-03-29 00:58:43.818398 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-29 00:58:43.818405 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-29 00:58:43.818409 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-29 00:58:43.818413 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-29 00:58:43.818416 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-29 00:58:43.818420 | orchestrator | 2026-03-29 00:58:43.818423 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-03-29 00:58:43.818427 | orchestrator | Sunday 29 March 2026 00:57:56 +0000 (0:00:36.968) 0:01:04.974 ********** 2026-03-29 00:58:43.818430 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:58:43.818434 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:58:43.818437 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:58:43.818441 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:58:43.818445 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:58:43.818448 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:58:43.818452 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-29 00:58:43.818455 | orchestrator | 2026-03-29 00:58:43.818459 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-29 00:58:43.818463 | orchestrator | Sunday 29 March 2026 00:58:14 +0000 (0:00:17.920) 0:01:22.895 ********** 2026-03-29 00:58:43.818466 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:58:43.818470 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:58:43.818473 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:58:43.818477 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:58:43.818493 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:58:43.818498 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:58:43.818501 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-29 00:58:43.818505 | orchestrator | 2026-03-29 00:58:43.818509 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-29 00:58:43.818512 | orchestrator | Sunday 29 March 2026 00:58:24 +0000 (0:00:09.586) 0:01:32.481 ********** 2026-03-29 00:58:43.818518 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:58:43.818522 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-29 00:58:43.818525 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 00:58:43.818529 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:58:43.818533 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-29 00:58:43.818538 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 00:58:43.818541 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:58:43.818544 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-29 00:58:43.818548 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 00:58:43.818551 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:58:43.818555 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-29 00:58:43.818560 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 00:58:43.818565 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:58:43.818570 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-03-29 00:58:43.818575 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 00:58:43.818582 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:58:43.818588 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-29 00:58:43.818593 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 00:58:43.818598 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-29 00:58:43.818603 | orchestrator | 2026-03-29 00:58:43.818607 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:58:43.818612 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-29 00:58:43.818619 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-29 00:58:43.818624 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-29 00:58:43.818629 | orchestrator | 2026-03-29 00:58:43.818634 | orchestrator | 2026-03-29 00:58:43.818639 | orchestrator | 2026-03-29 00:58:43.818643 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:58:43.818648 | orchestrator | Sunday 29 March 2026 00:58:41 +0000 (0:00:17.451) 0:01:49.933 ********** 2026-03-29 00:58:43.818653 | orchestrator | =============================================================================== 2026-03-29 00:58:43.818658 | orchestrator | create openstack pool(s) ----------------------------------------------- 36.97s 2026-03-29 00:58:43.818664 | orchestrator | generate keys ---------------------------------------------------------- 17.92s 2026-03-29 00:58:43.818670 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.45s 
2026-03-29 00:58:43.818675 | orchestrator | get keys from monitors -------------------------------------------------- 9.59s 2026-03-29 00:58:43.818680 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.91s 2026-03-29 00:58:43.818685 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.74s 2026-03-29 00:58:43.818691 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.08s 2026-03-29 00:58:43.818695 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.94s 2026-03-29 00:58:43.818705 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.90s 2026-03-29 00:58:43.818710 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.87s 2026-03-29 00:58:43.818715 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.73s 2026-03-29 00:58:43.818720 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.69s 2026-03-29 00:58:43.818725 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.68s 2026-03-29 00:58:43.818730 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.68s 2026-03-29 00:58:43.818734 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.66s 2026-03-29 00:58:43.818739 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.63s 2026-03-29 00:58:43.818744 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.63s 2026-03-29 00:58:43.818749 | orchestrator | ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks --- 0.61s 2026-03-29 00:58:43.818755 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.61s 2026-03-29 
00:58:43.818761 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.58s 2026-03-29 00:58:43.818766 | orchestrator | 2026-03-29 00:58:43 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:46.856590 | orchestrator | 2026-03-29 00:58:46 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:58:46.858371 | orchestrator | 2026-03-29 00:58:46 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:58:46.860586 | orchestrator | 2026-03-29 00:58:46 | INFO  | Task 58128f79-0730-49f6-8b80-36a7f0684464 is in state STARTED 2026-03-29 00:58:46.860649 | orchestrator | 2026-03-29 00:58:46 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:49.911782 | orchestrator | 2026-03-29 00:58:49 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:58:49.914302 | orchestrator | 2026-03-29 00:58:49 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:58:49.917066 | orchestrator | 2026-03-29 00:58:49 | INFO  | Task 58128f79-0730-49f6-8b80-36a7f0684464 is in state STARTED 2026-03-29 00:58:49.917137 | orchestrator | 2026-03-29 00:58:49 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:52.955213 | orchestrator | 2026-03-29 00:58:52 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:58:52.956230 | orchestrator | 2026-03-29 00:58:52 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:58:52.956856 | orchestrator | 2026-03-29 00:58:52 | INFO  | Task 58128f79-0730-49f6-8b80-36a7f0684464 is in state STARTED 2026-03-29 00:58:52.957946 | orchestrator | 2026-03-29 00:58:52 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:56.024816 | orchestrator | 2026-03-29 00:58:56 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:58:56.029015 | orchestrator | 2026-03-29 00:58:56 | INFO  | 
Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:58:56.032676 | orchestrator | 2026-03-29 00:58:56 | INFO  | Task 58128f79-0730-49f6-8b80-36a7f0684464 is in state STARTED 2026-03-29 00:58:56.033669 | orchestrator | 2026-03-29 00:58:56 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:59.083404 | orchestrator | 2026-03-29 00:58:59 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:58:59.085293 | orchestrator | 2026-03-29 00:58:59 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:58:59.086707 | orchestrator | 2026-03-29 00:58:59 | INFO  | Task 58128f79-0730-49f6-8b80-36a7f0684464 is in state STARTED 2026-03-29 00:58:59.086963 | orchestrator | 2026-03-29 00:58:59 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:02.119660 | orchestrator | 2026-03-29 00:59:02 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:59:02.120794 | orchestrator | 2026-03-29 00:59:02 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:59:02.120830 | orchestrator | 2026-03-29 00:59:02 | INFO  | Task 58128f79-0730-49f6-8b80-36a7f0684464 is in state STARTED 2026-03-29 00:59:02.120838 | orchestrator | 2026-03-29 00:59:02 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:05.161644 | orchestrator | 2026-03-29 00:59:05 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:59:05.163341 | orchestrator | 2026-03-29 00:59:05 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:59:05.164902 | orchestrator | 2026-03-29 00:59:05 | INFO  | Task 58128f79-0730-49f6-8b80-36a7f0684464 is in state STARTED 2026-03-29 00:59:05.165229 | orchestrator | 2026-03-29 00:59:05 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:08.209670 | orchestrator | 2026-03-29 00:59:08 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state 
STARTED 2026-03-29 00:59:08.211807 | orchestrator | 2026-03-29 00:59:08 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:59:08.212728 | orchestrator | 2026-03-29 00:59:08 | INFO  | Task 58128f79-0730-49f6-8b80-36a7f0684464 is in state STARTED 2026-03-29 00:59:08.212749 | orchestrator | 2026-03-29 00:59:08 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:11.263564 | orchestrator | 2026-03-29 00:59:11 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:59:11.265912 | orchestrator | 2026-03-29 00:59:11 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:59:11.267844 | orchestrator | 2026-03-29 00:59:11 | INFO  | Task 58128f79-0730-49f6-8b80-36a7f0684464 is in state STARTED 2026-03-29 00:59:11.268048 | orchestrator | 2026-03-29 00:59:11 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:14.327356 | orchestrator | 2026-03-29 00:59:14 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:59:14.328953 | orchestrator | 2026-03-29 00:59:14 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:59:14.331647 | orchestrator | 2026-03-29 00:59:14 | INFO  | Task 58128f79-0730-49f6-8b80-36a7f0684464 is in state STARTED 2026-03-29 00:59:14.331699 | orchestrator | 2026-03-29 00:59:14 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:17.382154 | orchestrator | 2026-03-29 00:59:17 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:59:17.387929 | orchestrator | 2026-03-29 00:59:17 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:59:17.390755 | orchestrator | 2026-03-29 00:59:17 | INFO  | Task 58128f79-0730-49f6-8b80-36a7f0684464 is in state STARTED 2026-03-29 00:59:17.390937 | orchestrator | 2026-03-29 00:59:17 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:20.443600 | orchestrator | 
2026-03-29 00:59:20 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:59:20.445221 | orchestrator | 2026-03-29 00:59:20 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:59:20.448925 | orchestrator | 2026-03-29 00:59:20 | INFO  | Task 58128f79-0730-49f6-8b80-36a7f0684464 is in state STARTED 2026-03-29 00:59:20.449010 | orchestrator | 2026-03-29 00:59:20 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:23.507082 | orchestrator | 2026-03-29 00:59:23 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:59:23.507387 | orchestrator | 2026-03-29 00:59:23 | INFO  | Task cc596ce9-3273-4177-a75f-661f838ebdcd is in state STARTED 2026-03-29 00:59:23.508903 | orchestrator | 2026-03-29 00:59:23 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:59:23.510180 | orchestrator | 2026-03-29 00:59:23 | INFO  | Task 58128f79-0730-49f6-8b80-36a7f0684464 is in state SUCCESS 2026-03-29 00:59:23.510214 | orchestrator | 2026-03-29 00:59:23 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:26.564441 | orchestrator | 2026-03-29 00:59:26 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:59:26.567545 | orchestrator | 2026-03-29 00:59:26 | INFO  | Task cc596ce9-3273-4177-a75f-661f838ebdcd is in state STARTED 2026-03-29 00:59:26.571121 | orchestrator | 2026-03-29 00:59:26 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:59:26.571264 | orchestrator | 2026-03-29 00:59:26 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:29.609656 | orchestrator | 2026-03-29 00:59:29 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:59:29.609750 | orchestrator | 2026-03-29 00:59:29 | INFO  | Task cc596ce9-3273-4177-a75f-661f838ebdcd is in state STARTED 2026-03-29 00:59:29.611582 | orchestrator | 2026-03-29 00:59:29 | INFO  | 
Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state STARTED 2026-03-29 00:59:29.611645 | orchestrator | 2026-03-29 00:59:29 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:32.677282 | orchestrator | 2026-03-29 00:59:32 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:59:32.678679 | orchestrator | 2026-03-29 00:59:32 | INFO  | Task cc596ce9-3273-4177-a75f-661f838ebdcd is in state STARTED 2026-03-29 00:59:32.683284 | orchestrator | 2026-03-29 00:59:32 | INFO  | Task 8fd1b60d-53e6-44d6-87e3-817fe21433c8 is in state SUCCESS 2026-03-29 00:59:32.685371 | orchestrator | 2026-03-29 00:59:32.685412 | orchestrator | 2026-03-29 00:59:32.685417 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-29 00:59:32.685421 | orchestrator | 2026-03-29 00:59:32.685425 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-29 00:59:32.685429 | orchestrator | Sunday 29 March 2026 00:58:45 +0000 (0:00:00.264) 0:00:00.264 ********** 2026-03-29 00:59:32.685432 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-29 00:59:32.685437 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-29 00:59:32.685463 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-29 00:59:32.685467 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-29 00:59:32.685471 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-29 00:59:32.685475 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-29 00:59:32.685478 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 
=> (item=ceph.client.glance.keyring) 2026-03-29 00:59:32.685481 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-29 00:59:32.685484 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-29 00:59:32.685487 | orchestrator | 2026-03-29 00:59:32.685504 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-29 00:59:32.685509 | orchestrator | Sunday 29 March 2026 00:58:50 +0000 (0:00:05.207) 0:00:05.471 ********** 2026-03-29 00:59:32.685514 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-29 00:59:32.685519 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-29 00:59:32.685525 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-29 00:59:32.685528 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-29 00:59:32.685531 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-29 00:59:32.685534 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-29 00:59:32.685537 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-29 00:59:32.685540 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-29 00:59:32.685549 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-29 00:59:32.685552 | orchestrator | 2026-03-29 00:59:32.685555 | orchestrator | TASK [Create share directory] ************************************************** 
2026-03-29 00:59:32.685558 | orchestrator | Sunday 29 March 2026 00:58:54 +0000 (0:00:04.096) 0:00:09.567 ********** 2026-03-29 00:59:32.685562 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-29 00:59:32.685565 | orchestrator | 2026-03-29 00:59:32.685568 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-29 00:59:32.685571 | orchestrator | Sunday 29 March 2026 00:58:56 +0000 (0:00:01.786) 0:00:11.354 ********** 2026-03-29 00:59:32.685575 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-29 00:59:32.685580 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-29 00:59:32.685618 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-29 00:59:32.685626 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-29 00:59:32.685629 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-29 00:59:32.685632 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-29 00:59:32.685635 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-29 00:59:32.685638 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-29 00:59:32.685641 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-29 00:59:32.685644 | orchestrator | 2026-03-29 00:59:32.685648 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-29 00:59:32.685654 | orchestrator | Sunday 29 March 2026 00:59:10 +0000 (0:00:13.519) 0:00:24.873 ********** 2026-03-29 00:59:32.685837 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 
2026-03-29 00:59:32.685845 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-29 00:59:32.685849 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-29 00:59:32.685853 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-29 00:59:32.685866 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-29 00:59:32.685873 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-29 00:59:32.685887 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-29 00:59:32.685892 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-29 00:59:32.685897 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-29 00:59:32.685903 | orchestrator | 2026-03-29 00:59:32.685906 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-29 00:59:32.685910 | orchestrator | Sunday 29 March 2026 00:59:13 +0000 (0:00:03.446) 0:00:28.320 ********** 2026-03-29 00:59:32.685913 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-29 00:59:32.685916 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-29 00:59:32.685920 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-29 00:59:32.685923 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-29 00:59:32.685926 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-29 00:59:32.685929 | 
orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-29 00:59:32.685932 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-29 00:59:32.685935 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-29 00:59:32.685938 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-29 00:59:32.685941 | orchestrator | 2026-03-29 00:59:32.685944 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:59:32.685947 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:59:32.685951 | orchestrator | 2026-03-29 00:59:32.685954 | orchestrator | 2026-03-29 00:59:32.685957 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:59:32.685960 | orchestrator | Sunday 29 March 2026 00:59:20 +0000 (0:00:07.231) 0:00:35.552 ********** 2026-03-29 00:59:32.685963 | orchestrator | =============================================================================== 2026-03-29 00:59:32.685966 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.52s 2026-03-29 00:59:32.685969 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.23s 2026-03-29 00:59:32.685972 | orchestrator | Check if ceph keys exist ------------------------------------------------ 5.21s 2026-03-29 00:59:32.685975 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.10s 2026-03-29 00:59:32.685978 | orchestrator | Check if target directories exist --------------------------------------- 3.45s 2026-03-29 00:59:32.685985 | orchestrator | Create share directory -------------------------------------------------- 1.79s 2026-03-29 00:59:32.685988 | orchestrator | 2026-03-29 00:59:32.685991 | orchestrator | 2026-03-29 
00:59:32.685994 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 00:59:32.685997 | orchestrator | 2026-03-29 00:59:32.686000 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 00:59:32.686003 | orchestrator | Sunday 29 March 2026 00:58:02 +0000 (0:00:00.281) 0:00:00.281 ********** 2026-03-29 00:59:32.686006 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:32.686010 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:32.686036 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:32.686040 | orchestrator | 2026-03-29 00:59:32.686043 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 00:59:32.686046 | orchestrator | Sunday 29 March 2026 00:58:02 +0000 (0:00:00.262) 0:00:00.543 ********** 2026-03-29 00:59:32.686049 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-03-29 00:59:32.686052 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-03-29 00:59:32.686055 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-03-29 00:59:32.686061 | orchestrator | 2026-03-29 00:59:32.686064 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-03-29 00:59:32.686067 | orchestrator | 2026-03-29 00:59:32.686070 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-29 00:59:32.686074 | orchestrator | Sunday 29 March 2026 00:58:02 +0000 (0:00:00.266) 0:00:00.810 ********** 2026-03-29 00:59:32.686077 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:32.686080 | orchestrator | 2026-03-29 00:59:32.686083 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-29 00:59:32.686086 | orchestrator | Sunday 29 March 2026 
00:58:03 +0000 (0:00:00.555) 0:00:01.365 ********** 2026-03-29 00:59:32.686097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 00:59:32.686104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 00:59:32.686119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 00:59:32.686125 | orchestrator | 2026-03-29 00:59:32.686131 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-29 00:59:32.686136 | orchestrator | Sunday 29 March 2026 00:58:04 +0000 (0:00:01.549) 0:00:02.915 ********** 2026-03-29 00:59:32.686141 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:32.686146 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:32.686150 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:32.686155 | orchestrator | 2026-03-29 00:59:32.686160 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-29 00:59:32.686165 | orchestrator | Sunday 29 March 2026 00:58:04 +0000 (0:00:00.265) 0:00:03.181 ********** 2026-03-29 00:59:32.686173 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-29 00:59:32.686178 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-29 00:59:32.686184 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-29 00:59:32.686187 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-29 00:59:32.686190 | orchestrator | skipping: [testbed-node-0] => (item={'name': 
'mistral', 'enabled': False})  2026-03-29 00:59:32.686193 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-29 00:59:32.686196 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-29 00:59:32.686199 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-29 00:59:32.686202 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-29 00:59:32.686205 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-29 00:59:32.686208 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-29 00:59:32.686211 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-29 00:59:32.686214 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-29 00:59:32.686217 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-29 00:59:32.686220 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-29 00:59:32.686223 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-29 00:59:32.686227 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-29 00:59:32.686230 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-29 00:59:32.686233 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-29 00:59:32.686236 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-29 00:59:32.686239 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-29 00:59:32.686242 | orchestrator | skipping: 
[testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-29 00:59:32.686247 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-29 00:59:32.686251 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-29 00:59:32.686254 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-29 00:59:32.686258 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-29 00:59:32.686262 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-29 00:59:32.686265 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-29 00:59:32.686268 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-29 00:59:32.686271 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-29 00:59:32.686274 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-29 00:59:32.686277 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-29 00:59:32.686283 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-29 00:59:32.686286 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-29 00:59:32.686289 | orchestrator | 2026-03-29 00:59:32.686292 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-29 00:59:32.686295 | orchestrator | Sunday 29 March 2026 00:58:05 +0000 (0:00:00.667) 0:00:03.848 ********** 2026-03-29 00:59:32.686298 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:32.686301 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:32.686304 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:32.686307 | orchestrator | 2026-03-29 00:59:32.686311 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-29 00:59:32.686315 | orchestrator | Sunday 29 March 2026 00:58:06 +0000 (0:00:00.416) 0:00:04.265 ********** 2026-03-29 00:59:32.686318 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:32.686321 | orchestrator | 2026-03-29 00:59:32.686324 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-29 00:59:32.686328 | orchestrator | Sunday 29 March 2026 00:58:06 +0000 (0:00:00.100) 0:00:04.365 ********** 2026-03-29 00:59:32.686331 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:32.686334 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:32.686337 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:32.686340 | orchestrator | 2026-03-29 00:59:32.686343 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-29 00:59:32.686346 | orchestrator | Sunday 29 March 2026 00:58:06 +0000 (0:00:00.257) 0:00:04.623 ********** 2026-03-29 00:59:32.686349 | orchestrator | ok: [testbed-node-0] 2026-03-29 
00:59:32.686352 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:32.686355 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:32.686358 | orchestrator | 2026-03-29 00:59:32.686361 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-29 00:59:32.686364 | orchestrator | Sunday 29 March 2026 00:58:06 +0000 (0:00:00.271) 0:00:04.895 ********** 2026-03-29 00:59:32.686368 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:32.686371 | orchestrator | 2026-03-29 00:59:32.686374 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-29 00:59:32.686377 | orchestrator | Sunday 29 March 2026 00:58:06 +0000 (0:00:00.138) 0:00:05.033 ********** 2026-03-29 00:59:32.686380 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:32.686383 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:32.686386 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:32.686389 | orchestrator | 2026-03-29 00:59:32.686392 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-29 00:59:32.686395 | orchestrator | Sunday 29 March 2026 00:58:07 +0000 (0:00:00.392) 0:00:05.425 ********** 2026-03-29 00:59:32.686398 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:32.686401 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:32.686404 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:32.686407 | orchestrator | 2026-03-29 00:59:32.686411 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-29 00:59:32.686414 | orchestrator | Sunday 29 March 2026 00:58:07 +0000 (0:00:00.271) 0:00:05.697 ********** 2026-03-29 00:59:32.686419 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:32.686424 | orchestrator | 2026-03-29 00:59:32.686428 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-29 
00:59:32.686433 | orchestrator | Sunday 29 March 2026 00:58:07 +0000 (0:00:00.096) 0:00:05.794 ********** 2026-03-29 00:59:32.686437 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:32.686457 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:32.686462 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:32.686467 | orchestrator | 2026-03-29 00:59:32.686473 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-29 00:59:32.686482 | orchestrator | Sunday 29 March 2026 00:58:07 +0000 (0:00:00.263) 0:00:06.057 ********** 2026-03-29 00:59:32.686485 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:32.686489 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:32.686492 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:32.686496 | orchestrator | 2026-03-29 00:59:32.686499 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-29 00:59:32.686503 | orchestrator | Sunday 29 March 2026 00:58:08 +0000 (0:00:00.267) 0:00:06.325 ********** 2026-03-29 00:59:32.686506 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:32.686510 | orchestrator | 2026-03-29 00:59:32.686513 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-29 00:59:32.686517 | orchestrator | Sunday 29 March 2026 00:58:08 +0000 (0:00:00.121) 0:00:06.447 ********** 2026-03-29 00:59:32.686520 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:32.686524 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:32.686528 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:32.686539 | orchestrator | 2026-03-29 00:59:32.686543 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-29 00:59:32.686547 | orchestrator | Sunday 29 March 2026 00:58:08 +0000 (0:00:00.395) 0:00:06.842 ********** 2026-03-29 00:59:32.686550 | orchestrator | ok: 
[testbed-node-0]
2026-03-29 00:59:32.686560 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:32.686570 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:32.686579 | orchestrator |
2026-03-29 00:59:32.686584 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 00:59:32.686588 | orchestrator | Sunday 29 March 2026 00:58:08 +0000 (0:00:00.254) 0:00:07.097 **********
2026-03-29 00:59:32.686593 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:32.686598 | orchestrator |
2026-03-29 00:59:32.686603 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 00:59:32.686607 | orchestrator | Sunday 29 March 2026 00:58:08 +0000 (0:00:00.113) 0:00:07.210 **********
2026-03-29 00:59:32.686612 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:32.686617 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:32.686622 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:32.686626 | orchestrator |
2026-03-29 00:59:32.686632 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 00:59:32.686636 | orchestrator | Sunday 29 March 2026 00:58:09 +0000 (0:00:00.262) 0:00:07.472 **********
2026-03-29 00:59:32.686642 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:32.686646 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:32.686650 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:32.686653 | orchestrator |
2026-03-29 00:59:32.686656 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 00:59:32.686659 | orchestrator | Sunday 29 March 2026 00:58:09 +0000 (0:00:00.496) 0:00:07.969 **********
2026-03-29 00:59:32.686662 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:32.686665 | orchestrator |
2026-03-29 00:59:32.686668 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 00:59:32.686671 | orchestrator | Sunday 29 March 2026 00:58:09 +0000 (0:00:00.136) 0:00:08.106 **********
2026-03-29 00:59:32.686674 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:32.686677 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:32.686680 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:32.686683 | orchestrator |
2026-03-29 00:59:32.686689 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 00:59:32.686692 | orchestrator | Sunday 29 March 2026 00:58:10 +0000 (0:00:00.334) 0:00:08.440 **********
2026-03-29 00:59:32.686695 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:32.686698 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:32.686701 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:32.686704 | orchestrator |
2026-03-29 00:59:32.686707 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 00:59:32.686714 | orchestrator | Sunday 29 March 2026 00:58:10 +0000 (0:00:00.397) 0:00:08.837 **********
2026-03-29 00:59:32.686717 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:32.686720 | orchestrator |
2026-03-29 00:59:32.686723 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 00:59:32.686726 | orchestrator | Sunday 29 March 2026 00:58:10 +0000 (0:00:00.129) 0:00:08.967 **********
2026-03-29 00:59:32.686729 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:32.686732 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:32.686735 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:32.686738 | orchestrator |
2026-03-29 00:59:32.686742 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 00:59:32.686745 | orchestrator | Sunday 29 March 2026 00:58:10 +0000 (0:00:00.272) 0:00:09.240 **********
2026-03-29 00:59:32.686748 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:32.686751 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:32.686754 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:32.686757 | orchestrator |
2026-03-29 00:59:32.686762 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 00:59:32.686767 | orchestrator | Sunday 29 March 2026 00:58:11 +0000 (0:00:00.512) 0:00:09.752 **********
2026-03-29 00:59:32.686771 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:32.686776 | orchestrator |
2026-03-29 00:59:32.686781 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 00:59:32.686786 | orchestrator | Sunday 29 March 2026 00:58:11 +0000 (0:00:00.131) 0:00:09.884 **********
2026-03-29 00:59:32.686791 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:32.686795 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:32.686800 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:32.686805 | orchestrator |
2026-03-29 00:59:32.686810 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 00:59:32.686814 | orchestrator | Sunday 29 March 2026 00:58:11 +0000 (0:00:00.311) 0:00:10.195 **********
2026-03-29 00:59:32.686819 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:32.686824 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:32.686827 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:32.686830 | orchestrator |
2026-03-29 00:59:32.686833 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 00:59:32.686837 | orchestrator | Sunday 29 March 2026 00:58:12 +0000 (0:00:00.329) 0:00:10.525 **********
2026-03-29 00:59:32.686840 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:32.686843 | orchestrator |
2026-03-29 00:59:32.686850 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 00:59:32.686857 | orchestrator | Sunday 29 March 2026 00:58:12 +0000 (0:00:00.119) 0:00:10.645 **********
2026-03-29 00:59:32.686865 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:32.686869 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:32.686874 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:32.686880 | orchestrator |
2026-03-29 00:59:32.686884 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 00:59:32.686887 | orchestrator | Sunday 29 March 2026 00:58:12 +0000 (0:00:00.284) 0:00:10.929 **********
2026-03-29 00:59:32.686890 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:32.686894 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:32.686899 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:32.686904 | orchestrator |
2026-03-29 00:59:32.686909 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 00:59:32.686913 | orchestrator | Sunday 29 March 2026 00:58:13 +0000 (0:00:00.545) 0:00:11.475 **********
2026-03-29 00:59:32.686918 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:32.686923 | orchestrator |
2026-03-29 00:59:32.686928 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 00:59:32.686933 | orchestrator | Sunday 29 March 2026 00:58:13 +0000 (0:00:00.162) 0:00:11.637 **********
2026-03-29 00:59:32.686938 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:32.686951 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:32.686956 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:32.686961 | orchestrator |
2026-03-29 00:59:32.686966 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-03-29 00:59:32.686969 | orchestrator | Sunday 29 March 2026 00:58:13 +0000 (0:00:00.278) 0:00:11.916 **********
2026-03-29 00:59:32.686973 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:59:32.686976 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:59:32.686979 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:59:32.686982 | orchestrator |
2026-03-29 00:59:32.686985 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-03-29 00:59:32.686988 | orchestrator | Sunday 29 March 2026 00:58:15 +0000 (0:00:01.777) 0:00:13.693 **********
2026-03-29 00:59:32.686991 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-29 00:59:32.686994 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-29 00:59:32.686997 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-29 00:59:32.687000 | orchestrator |
2026-03-29 00:59:32.687003 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-03-29 00:59:32.687006 | orchestrator | Sunday 29 March 2026 00:58:18 +0000 (0:00:02.586) 0:00:16.279 **********
2026-03-29 00:59:32.687010 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-29 00:59:32.687013 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-29 00:59:32.687016 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-29 00:59:32.687019 | orchestrator |
2026-03-29 00:59:32.687025 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-03-29 00:59:32.687028 | orchestrator | Sunday 29 March 2026 00:58:20 +0000 (0:00:02.466) 0:00:18.746 **********
2026-03-29 00:59:32.687031 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-29 00:59:32.687034 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-29 00:59:32.687037 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-29 00:59:32.687040 | orchestrator |
2026-03-29 00:59:32.687043 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-03-29 00:59:32.687046 | orchestrator | Sunday 29 March 2026 00:58:22 +0000 (0:00:01.590) 0:00:20.336 **********
2026-03-29 00:59:32.687049 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:32.687052 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:32.687055 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:32.687058 | orchestrator |
2026-03-29 00:59:32.687061 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-03-29 00:59:32.687065 | orchestrator | Sunday 29 March 2026 00:58:22 +0000 (0:00:00.303) 0:00:20.640 **********
2026-03-29 00:59:32.687068 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:32.687071 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:32.687074 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:32.687077 | orchestrator |
2026-03-29 00:59:32.687080 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-29 00:59:32.687083 | orchestrator | Sunday 29 March 2026 00:58:22 +0000 (0:00:00.359) 0:00:20.999 **********
2026-03-29 00:59:32.687086 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:59:32.687089 | orchestrator |
2026-03-29 00:59:32.687092 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-03-29 00:59:32.687096 | orchestrator |
Sunday 29 March 2026 00:58:23 +0000 (0:00:00.877) 0:00:21.876 ********** 2026-03-29 00:59:32.687107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 00:59:32.687120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 00:59:32.687130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 00:59:32.687134 | orchestrator | 2026-03-29 00:59:32.687137 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-29 00:59:32.687140 | orchestrator | Sunday 29 March 2026 00:58:25 +0000 (0:00:01.414) 0:00:23.291 ********** 2026-03-29 00:59:32.687149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 00:59:32.687154 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:32.687160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 00:59:32.687163 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:32.687170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 00:59:32.687176 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:32.687179 | orchestrator | 2026-03-29 00:59:32.687182 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-29 00:59:32.687185 | orchestrator | Sunday 29 
March 2026 00:58:26 +0000 (0:00:01.089) 0:00:24.381 ********** 2026-03-29 00:59:32.687191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 00:59:32.687194 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:32.687200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 00:59:32.687206 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:32.687211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 00:59:32.687216 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:32.687219 | orchestrator | 2026-03-29 00:59:32.687222 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-29 00:59:32.687225 | orchestrator | Sunday 29 March 2026 00:58:27 +0000 (0:00:01.098) 0:00:25.479 ********** 2026-03-29 00:59:32.687232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 00:59:32.687238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 00:59:32.687246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 00:59:32.687250 | orchestrator | 2026-03-29 00:59:32.687253 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-29 00:59:32.687256 | orchestrator | Sunday 29 March 2026 00:58:28 +0000 (0:00:01.369) 
0:00:26.849 ********** 2026-03-29 00:59:32.687259 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:32.687262 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:32.687265 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:32.687268 | orchestrator | 2026-03-29 00:59:32.687272 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-29 00:59:32.687275 | orchestrator | Sunday 29 March 2026 00:58:28 +0000 (0:00:00.325) 0:00:27.175 ********** 2026-03-29 00:59:32.687278 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:32.687281 | orchestrator | 2026-03-29 00:59:32.687286 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-29 00:59:32.687290 | orchestrator | Sunday 29 March 2026 00:58:29 +0000 (0:00:00.930) 0:00:28.106 ********** 2026-03-29 00:59:32.687293 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:32.687296 | orchestrator | 2026-03-29 00:59:32.687299 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-29 00:59:32.687304 | orchestrator | Sunday 29 March 2026 00:58:32 +0000 (0:00:02.805) 0:00:30.911 ********** 2026-03-29 00:59:32.687307 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:32.687310 | orchestrator | 2026-03-29 00:59:32.687313 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-29 00:59:32.687317 | orchestrator | Sunday 29 March 2026 00:58:35 +0000 (0:00:02.603) 0:00:33.515 ********** 2026-03-29 00:59:32.687320 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:32.687323 | orchestrator | 2026-03-29 00:59:32.687326 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-29 00:59:32.687329 | orchestrator | Sunday 29 March 2026 00:58:50 +0000 
(0:00:15.320) 0:00:48.835 ********** 2026-03-29 00:59:32.687332 | orchestrator | 2026-03-29 00:59:32.687335 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-29 00:59:32.687338 | orchestrator | Sunday 29 March 2026 00:58:50 +0000 (0:00:00.072) 0:00:48.908 ********** 2026-03-29 00:59:32.687341 | orchestrator | 2026-03-29 00:59:32.687344 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-29 00:59:32.687348 | orchestrator | Sunday 29 March 2026 00:58:50 +0000 (0:00:00.066) 0:00:48.975 ********** 2026-03-29 00:59:32.687351 | orchestrator | 2026-03-29 00:59:32.687354 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-29 00:59:32.687357 | orchestrator | Sunday 29 March 2026 00:58:50 +0000 (0:00:00.066) 0:00:49.041 ********** 2026-03-29 00:59:32.687360 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:32.687363 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:32.687366 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:59:32.687369 | orchestrator | 2026-03-29 00:59:32.687373 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:59:32.687376 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-29 00:59:32.687379 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-29 00:59:32.687382 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-29 00:59:32.687385 | orchestrator | 2026-03-29 00:59:32.687389 | orchestrator | 2026-03-29 00:59:32.687393 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:59:32.687396 | orchestrator | Sunday 29 March 2026 00:59:31 +0000 (0:00:40.687) 0:01:29.729 
********** 2026-03-29 00:59:32.687400 | orchestrator | =============================================================================== 2026-03-29 00:59:32.687403 | orchestrator | horizon : Restart horizon container ------------------------------------ 40.69s 2026-03-29 00:59:32.687406 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.32s 2026-03-29 00:59:32.687409 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.81s 2026-03-29 00:59:32.687412 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.60s 2026-03-29 00:59:32.687415 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.59s 2026-03-29 00:59:32.687418 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.47s 2026-03-29 00:59:32.687421 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.78s 2026-03-29 00:59:32.687424 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.59s 2026-03-29 00:59:32.687427 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.55s 2026-03-29 00:59:32.687430 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.41s 2026-03-29 00:59:32.687433 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.37s 2026-03-29 00:59:32.687436 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.10s 2026-03-29 00:59:32.687502 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 1.09s 2026-03-29 00:59:32.687507 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.93s 2026-03-29 00:59:32.687510 | orchestrator | horizon : include_tasks ------------------------------------------------- 
0.88s 2026-03-29 00:59:32.687513 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.67s 2026-03-29 00:59:32.687516 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s 2026-03-29 00:59:32.687519 | orchestrator | horizon : Update policy file name --------------------------------------- 0.55s 2026-03-29 00:59:32.687522 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s 2026-03-29 00:59:32.687526 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s 2026-03-29 00:59:32.687529 | orchestrator | 2026-03-29 00:59:32 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:35.728555 | orchestrator | 2026-03-29 00:59:35 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state STARTED 2026-03-29 00:59:35.729289 | orchestrator | 2026-03-29 00:59:35 | INFO  | Task cc596ce9-3273-4177-a75f-661f838ebdcd is in state STARTED [... identical "is in state STARTED" polling output, repeated every ~3 s from 00:59:35 to 01:00:21, trimmed ...] 2026-03-29 01:00:21.419103 | orchestrator | 2026-03-29 01:00:21 | INFO  | Task cc596ce9-3273-4177-a75f-661f838ebdcd is in state SUCCESS 2026-03-29 01:00:21.422793 | orchestrator | 2026-03-29 01:00:21 | INFO  | Task 4ee82f15-bb76-4bf5-9135-7a1b7ae7758f is in state STARTED 2026-03-29 01:00:21.426253 | orchestrator | 2026-03-29 01:00:21 | INFO  | Task 3c94cf3d-c3ee-49dc-8632-6669d0e282b7 is in state STARTED 2026-03-29 01:00:21.427194 | orchestrator | 2026-03-29 01:00:21 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED [... identical polling output trimmed ...] 2026-03-29 01:00:27.525866 | orchestrator | 2026-03-29 01:00:27 | INFO  | Task 60ca6c37-86a5-42bd-92c5-06c0dec8515f is in state STARTED 2026-03-29 01:00:27.527728 | orchestrator | 2026-03-29 01:00:27 | INFO  | Task 3c94cf3d-c3ee-49dc-8632-6669d0e282b7 is in state SUCCESS 2026-03-29 01:00:27.529014 | orchestrator | 2026-03-29 01:00:27 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED [... identical polling output, repeated until 01:00:39, trimmed ...] 2026-03-29 01:00:39.730536 | orchestrator | 2026-03-29 01:00:39 | INFO  | Task cebd383f-6df6-4d15-a29b-3d6a6e0487ea is in state SUCCESS 2026-03-29 01:00:39.731653 | orchestrator | 2026-03-29 01:00:39.731716 | orchestrator | 2026-03-29 01:00:39.731725 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-29 01:00:39.731734 | orchestrator | 2026-03-29 01:00:39.731741 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-29 01:00:39.731749 | orchestrator | Sunday 29 March 2026 00:59:24 +0000 (0:00:00.297) 0:00:00.297 ********** 2026-03-29 01:00:39.731756 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-29 01:00:39.731765 | orchestrator | 2026-03-29 01:00:39.731772 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-29 01:00:39.731779 | orchestrator | Sunday 29 March 2026 00:59:24 +0000 (0:00:00.218) 0:00:00.516 ********** 2026-03-29 01:00:39.731786 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-29 01:00:39.731794 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-29 01:00:39.731801 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-29 01:00:39.731808 | orchestrator | 2026-03-29 01:00:39.731814 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-29 01:00:39.731821 | orchestrator | Sunday 29 March 2026 00:59:26 +0000 (0:00:01.460) 0:00:01.976 ********** 2026-03-29 01:00:39.731828 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-29 01:00:39.731835 | orchestrator | 2026-03-29 01:00:39.731841 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-29 01:00:39.731848 | orchestrator | Sunday 29 March 2026 00:59:27 +0000 (0:00:01.103) 0:00:03.080 ********** 2026-03-29 01:00:39.731854 | orchestrator | changed: [testbed-manager] 2026-03-29 01:00:39.731861 | orchestrator | 2026-03-29 01:00:39.731867 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-29 01:00:39.731873 | orchestrator | Sunday 29 March 2026 00:59:28 +0000 (0:00:00.897) 0:00:03.977 ********** 2026-03-29 01:00:39.731879 | orchestrator | changed: [testbed-manager] 2026-03-29 01:00:39.731885 | orchestrator | 2026-03-29 01:00:39.731890 | orchestrator | TASK [osism.services.cephclient 
: Manage cephclient service] ******************* 2026-03-29 01:00:39.731896 | orchestrator | Sunday 29 March 2026 00:59:29 +0000 (0:00:01.018) 0:00:04.995 ********** 2026-03-29 01:00:39.731902 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-03-29 01:00:39.731907 | orchestrator | ok: [testbed-manager] 2026-03-29 01:00:39.731914 | orchestrator | 2026-03-29 01:00:39.731920 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-29 01:00:39.731926 | orchestrator | Sunday 29 March 2026 01:00:09 +0000 (0:00:40.319) 0:00:45.315 ********** 2026-03-29 01:00:39.731932 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-29 01:00:39.731939 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-29 01:00:39.731945 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-29 01:00:39.731951 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-29 01:00:39.731957 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-29 01:00:39.731964 | orchestrator | 2026-03-29 01:00:39.731970 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-29 01:00:39.731976 | orchestrator | Sunday 29 March 2026 01:00:13 +0000 (0:00:04.254) 0:00:49.569 ********** 2026-03-29 01:00:39.731983 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-29 01:00:39.731989 | orchestrator | 2026-03-29 01:00:39.732019 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-29 01:00:39.732026 | orchestrator | Sunday 29 March 2026 01:00:14 +0000 (0:00:00.684) 0:00:50.254 ********** 2026-03-29 01:00:39.732032 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:00:39.732038 | orchestrator | 2026-03-29 01:00:39.732045 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 
2026-03-29 01:00:39.732052 | orchestrator | Sunday 29 March 2026 01:00:14 +0000 (0:00:00.125) 0:00:50.379 ********** 2026-03-29 01:00:39.732059 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:00:39.732065 | orchestrator | 2026-03-29 01:00:39.732072 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-03-29 01:00:39.732079 | orchestrator | Sunday 29 March 2026 01:00:14 +0000 (0:00:00.306) 0:00:50.686 ********** 2026-03-29 01:00:39.732085 | orchestrator | changed: [testbed-manager] 2026-03-29 01:00:39.732091 | orchestrator | 2026-03-29 01:00:39.732098 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-29 01:00:39.732105 | orchestrator | Sunday 29 March 2026 01:00:16 +0000 (0:00:01.559) 0:00:52.245 ********** 2026-03-29 01:00:39.732111 | orchestrator | changed: [testbed-manager] 2026-03-29 01:00:39.732117 | orchestrator | 2026-03-29 01:00:39.732137 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-03-29 01:00:39.732145 | orchestrator | Sunday 29 March 2026 01:00:17 +0000 (0:00:00.744) 0:00:52.990 ********** 2026-03-29 01:00:39.732151 | orchestrator | changed: [testbed-manager] 2026-03-29 01:00:39.732157 | orchestrator | 2026-03-29 01:00:39.732164 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-29 01:00:39.732170 | orchestrator | Sunday 29 March 2026 01:00:17 +0000 (0:00:00.573) 0:00:53.563 ********** 2026-03-29 01:00:39.732177 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-29 01:00:39.732184 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-29 01:00:39.732191 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-29 01:00:39.732198 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-29 01:00:39.732204 | orchestrator | 2026-03-29 01:00:39.732211 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-29 01:00:39.732218 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 01:00:39.732290 | orchestrator | 2026-03-29 01:00:39.732298 | orchestrator | 2026-03-29 01:00:39.732320 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:00:39.732327 | orchestrator | Sunday 29 March 2026 01:00:19 +0000 (0:00:01.495) 0:00:55.059 ********** 2026-03-29 01:00:39.732333 | orchestrator | =============================================================================== 2026-03-29 01:00:39.732339 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.32s 2026-03-29 01:00:39.732344 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.25s 2026-03-29 01:00:39.732350 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.56s 2026-03-29 01:00:39.732356 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.50s 2026-03-29 01:00:39.732362 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.46s 2026-03-29 01:00:39.732448 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.10s 2026-03-29 01:00:39.732454 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.02s 2026-03-29 01:00:39.732461 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.90s 2026-03-29 01:00:39.732467 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.74s 2026-03-29 01:00:39.732473 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.68s 2026-03-29 01:00:39.732479 | orchestrator | osism.services.cephclient : Wait for an healthy service 
----------------- 0.57s 2026-03-29 01:00:39.732485 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.31s 2026-03-29 01:00:39.732504 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s 2026-03-29 01:00:39.732511 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-03-29 01:00:39.732518 | orchestrator | 2026-03-29 01:00:39.732524 | orchestrator | 2026-03-29 01:00:39.732530 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:00:39.732536 | orchestrator | 2026-03-29 01:00:39.732542 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:00:39.732548 | orchestrator | Sunday 29 March 2026 01:00:23 +0000 (0:00:00.189) 0:00:00.189 ********** 2026-03-29 01:00:39.733012 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:00:39.733029 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:00:39.733035 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:00:39.733041 | orchestrator | 2026-03-29 01:00:39.733047 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:00:39.733052 | orchestrator | Sunday 29 March 2026 01:00:23 +0000 (0:00:00.434) 0:00:00.623 ********** 2026-03-29 01:00:39.733058 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-29 01:00:39.733065 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-29 01:00:39.733072 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-29 01:00:39.733079 | orchestrator | 2026-03-29 01:00:39.733086 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-03-29 01:00:39.733093 | orchestrator | 2026-03-29 01:00:39.733099 | orchestrator | TASK [Waiting for Keystone public port to be UP] 
******************************* 2026-03-29 01:00:39.733104 | orchestrator | Sunday 29 March 2026 01:00:24 +0000 (0:00:00.527) 0:00:01.151 ********** 2026-03-29 01:00:39.733110 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:00:39.733116 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:00:39.733122 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:00:39.733128 | orchestrator | 2026-03-29 01:00:39.733134 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:00:39.733141 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:00:39.733149 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:00:39.733155 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:00:39.733162 | orchestrator | 2026-03-29 01:00:39.733168 | orchestrator | 2026-03-29 01:00:39.733174 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:00:39.733181 | orchestrator | Sunday 29 March 2026 01:00:25 +0000 (0:00:01.054) 0:00:02.206 ********** 2026-03-29 01:00:39.733187 | orchestrator | =============================================================================== 2026-03-29 01:00:39.733194 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.05s 2026-03-29 01:00:39.733213 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2026-03-29 01:00:39.733220 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s 2026-03-29 01:00:39.733226 | orchestrator | 2026-03-29 01:00:39.733233 | orchestrator | 2026-03-29 01:00:39.733239 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:00:39.733246 | orchestrator | 2026-03-29 
01:00:39.733252 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:00:39.733258 | orchestrator | Sunday 29 March 2026 00:58:02 +0000 (0:00:00.276) 0:00:00.276 ********** 2026-03-29 01:00:39.733264 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:00:39.733271 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:00:39.733277 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:00:39.733283 | orchestrator | 2026-03-29 01:00:39.733290 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:00:39.733296 | orchestrator | Sunday 29 March 2026 00:58:02 +0000 (0:00:00.245) 0:00:00.522 ********** 2026-03-29 01:00:39.733315 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-29 01:00:39.733320 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-29 01:00:39.733327 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-29 01:00:39.733335 | orchestrator | 2026-03-29 01:00:39.733341 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-29 01:00:39.733348 | orchestrator | 2026-03-29 01:00:39.733421 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-29 01:00:39.733430 | orchestrator | Sunday 29 March 2026 00:58:02 +0000 (0:00:00.245) 0:00:00.768 ********** 2026-03-29 01:00:39.733437 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:00:39.733443 | orchestrator | 2026-03-29 01:00:39.733450 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-29 01:00:39.733456 | orchestrator | Sunday 29 March 2026 00:58:03 +0000 (0:00:00.565) 0:00:01.333 ********** 2026-03-29 01:00:39.733467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:00:39.733477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 
2026-03-29 01:00:39.733491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:00:39.733507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 01:00:39.733538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 01:00:39.733546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 01:00:39.733554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:00:39.733562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:00:39.733568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:00:39.733575 | orchestrator | 2026-03-29 01:00:39.733582 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-03-29 01:00:39.733593 | orchestrator | Sunday 29 March 2026 00:58:05 +0000 (0:00:02.147) 0:00:03.480 ********** 2026-03-29 01:00:39.733603 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:00:39.733610 | orchestrator | 2026-03-29 01:00:39.733616 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-29 01:00:39.733623 | orchestrator | Sunday 29 March 2026 00:58:05 +0000 (0:00:00.110) 0:00:03.590 ********** 2026-03-29 01:00:39.733630 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:00:39.733637 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:00:39.733644 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:00:39.733651 | orchestrator | 2026-03-29 01:00:39.733657 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 
2026-03-29 01:00:39.733664 | orchestrator | Sunday 29 March 2026 00:58:05 +0000 (0:00:00.288) 0:00:03.879 ********** 2026-03-29 01:00:39.733674 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 01:00:39.733683 | orchestrator | 2026-03-29 01:00:39.733689 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-29 01:00:39.733698 | orchestrator | Sunday 29 March 2026 00:58:06 +0000 (0:00:00.827) 0:00:04.707 ********** 2026-03-29 01:00:39.733710 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:00:39.733770 | orchestrator | 2026-03-29 01:00:39.733777 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-29 01:00:39.733800 | orchestrator | Sunday 29 March 2026 00:58:07 +0000 (0:00:00.611) 0:00:05.319 ********** 2026-03-29 01:00:39.733810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:00:39.733819 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:00:39.733829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:00:39.733849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 01:00:39.733872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 01:00:39.733884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 
01:00:39.733897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:00:39.733910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:00:39.733922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:00:39.733942 | orchestrator | 2026-03-29 01:00:39.733952 | 
orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-29 01:00:39.733959 | orchestrator | Sunday 29 March 2026 00:58:09 +0000 (0:00:02.843) 0:00:08.163 ********** 2026-03-29 01:00:39.733970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 01:00:39.733982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:00:39.733990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 01:00:39.733996 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:00:39.734003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 01:00:39.734065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:00:39.734080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 01:00:39.734086 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:00:39.734101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 01:00:39.734108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:00:39.734115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 01:00:39.734121 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:00:39.734127 | orchestrator | 2026-03-29 01:00:39.734133 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-29 01:00:39.734139 | orchestrator | Sunday 29 March 2026 00:58:10 +0000 (0:00:00.742) 0:00:08.906 ********** 2026-03-29 01:00:39.734145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 01:00:39.734161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:00:39.734168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 01:00:39.734174 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:00:39.734186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 01:00:39.734193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:00:39.734199 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 01:00:39.734210 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:00:39.734216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 01:00:39.734226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:00:39.734236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 01:00:39.734242 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:00:39.734248 | orchestrator | 2026-03-29 01:00:39.734254 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-29 01:00:39.734260 | orchestrator | Sunday 29 March 2026 00:58:11 +0000 (0:00:00.944) 0:00:09.850 ********** 2026-03-29 01:00:39.734267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:00:39.734278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:00:39.734288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:00:39.734300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 01:00:39.734308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 01:00:39.734314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 01:00:39.734325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:00:39.734331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:00:39.734337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:00:39.734343 | orchestrator | 2026-03-29 01:00:39.734349 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-29 01:00:39.734355 | orchestrator | Sunday 29 March 2026 00:58:14 +0000 (0:00:03.318) 0:00:13.168 ********** 2026-03-29 01:00:39.734430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:00:39.734451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:00:39.734458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:00:39.734470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:00:39.734480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:00:39.734487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:00:39.734500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:00:39.734515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:00:39.734530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:00:39.734536 | orchestrator | 2026-03-29 01:00:39.734542 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-29 01:00:39.734548 | orchestrator | Sunday 29 March 2026 
00:58:20 +0000 (0:00:05.893) 0:00:19.062 ********** 2026-03-29 01:00:39.734555 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:00:39.734562 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:00:39.734569 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:00:39.734700 | orchestrator | 2026-03-29 01:00:39.734710 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-29 01:00:39.734716 | orchestrator | Sunday 29 March 2026 00:58:22 +0000 (0:00:01.485) 0:00:20.547 ********** 2026-03-29 01:00:39.734722 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:00:39.734729 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:00:39.734735 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:00:39.734742 | orchestrator | 2026-03-29 01:00:39.734749 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-29 01:00:39.734755 | orchestrator | Sunday 29 March 2026 00:58:23 +0000 (0:00:01.113) 0:00:21.661 ********** 2026-03-29 01:00:39.734762 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:00:39.734769 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:00:39.734775 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:00:39.734780 | orchestrator | 2026-03-29 01:00:39.734786 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-29 01:00:39.734792 | orchestrator | Sunday 29 March 2026 00:58:23 +0000 (0:00:00.312) 0:00:21.973 ********** 2026-03-29 01:00:39.734798 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:00:39.734804 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:00:39.734810 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:00:39.734816 | orchestrator | 2026-03-29 01:00:39.734823 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-29 01:00:39.734829 | orchestrator | Sunday 29 March 2026 
00:58:24 +0000 (0:00:00.353) 0:00:22.327 ********** 2026-03-29 01:00:39.734842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 01:00:39.734862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:00:39.734870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 01:00:39.734877 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:00:39.734884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 01:00:39.734892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:00:39.734906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 01:00:39.734914 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:00:39.734926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 01:00:39.734939 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:00:39.734946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 01:00:39.734953 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:00:39.734959 | orchestrator | 2026-03-29 01:00:39.734966 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-29 01:00:39.734972 | orchestrator | Sunday 29 March 2026 00:58:24 +0000 (0:00:00.604) 0:00:22.932 ********** 2026-03-29 01:00:39.734979 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:00:39.734985 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:00:39.734992 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:00:39.734998 | orchestrator | 2026-03-29 01:00:39.735005 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-29 01:00:39.735012 | 
orchestrator | Sunday 29 March 2026 00:58:25 +0000 (0:00:00.613) 0:00:23.545 ********** 2026-03-29 01:00:39.735019 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-29 01:00:39.735026 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-29 01:00:39.735033 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-29 01:00:39.735040 | orchestrator | 2026-03-29 01:00:39.735046 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-29 01:00:39.735053 | orchestrator | Sunday 29 March 2026 00:58:26 +0000 (0:00:01.609) 0:00:25.154 ********** 2026-03-29 01:00:39.735060 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 01:00:39.735067 | orchestrator | 2026-03-29 01:00:39.735073 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-29 01:00:39.735079 | orchestrator | Sunday 29 March 2026 00:58:27 +0000 (0:00:01.013) 0:00:26.168 ********** 2026-03-29 01:00:39.735086 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:00:39.735092 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:00:39.735099 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:00:39.735105 | orchestrator | 2026-03-29 01:00:39.735112 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-29 01:00:39.735124 | orchestrator | Sunday 29 March 2026 00:58:28 +0000 (0:00:00.604) 0:00:26.773 ********** 2026-03-29 01:00:39.735130 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 01:00:39.735137 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-29 01:00:39.735143 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-29 01:00:39.735150 | orchestrator | 2026-03-29 01:00:39.735160 | orchestrator | TASK [keystone : Set fact with 
the generated cron jobs for building the crontab later] *** 2026-03-29 01:00:39.735167 | orchestrator | Sunday 29 March 2026 00:58:29 +0000 (0:00:01.347) 0:00:28.120 ********** 2026-03-29 01:00:39.735174 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:00:39.735180 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:00:39.735186 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:00:39.735193 | orchestrator | 2026-03-29 01:00:39.735200 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-29 01:00:39.735206 | orchestrator | Sunday 29 March 2026 00:58:30 +0000 (0:00:00.536) 0:00:28.657 ********** 2026-03-29 01:00:39.735213 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-29 01:00:39.735220 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-29 01:00:39.735226 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-29 01:00:39.735234 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-29 01:00:39.735240 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-29 01:00:39.735252 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-29 01:00:39.735259 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-29 01:00:39.735266 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-29 01:00:39.735273 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-29 01:00:39.735280 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 
'fernet-push.sh'}) 2026-03-29 01:00:39.735287 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-29 01:00:39.735293 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-29 01:00:39.735299 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-29 01:00:39.735306 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-29 01:00:39.735312 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-29 01:00:39.735318 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-29 01:00:39.735325 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-29 01:00:39.735331 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-29 01:00:39.735338 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-29 01:00:39.735345 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-29 01:00:39.735351 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-29 01:00:39.735357 | orchestrator | 2026-03-29 01:00:39.735363 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-29 01:00:39.735417 | orchestrator | Sunday 29 March 2026 00:58:39 +0000 (0:00:08.922) 0:00:37.579 ********** 2026-03-29 01:00:39.735424 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-29 01:00:39.735437 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 
'sshd_config'}) 2026-03-29 01:00:39.735444 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-29 01:00:39.735450 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-29 01:00:39.735457 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-29 01:00:39.735464 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-29 01:00:39.735470 | orchestrator | 2026-03-29 01:00:39.735476 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-29 01:00:39.735482 | orchestrator | Sunday 29 March 2026 00:58:41 +0000 (0:00:02.471) 0:00:40.051 ********** 2026-03-29 01:00:39.735493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:00:39.735508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-29 01:00:39.735516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-29 01:00:39.735524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}})
2026-03-29 01:00:39.735536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}})
2026-03-29 01:00:39.735545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}})
2026-03-29 01:00:39.735553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}})
2026-03-29 01:00:39.735564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}})
2026-03-29 01:00:39.735571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}})
2026-03-29 01:00:39.735578 | orchestrator |
2026-03-29 01:00:39.735584 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-29 01:00:39.735590 | orchestrator | Sunday 29 March 2026 00:58:43 +0000 (0:00:02.039) 0:00:42.090 ********** 2026-03-29 01:00:39.735596 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:00:39.735603 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:00:39.735625 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:00:39.735632 | orchestrator | 2026-03-29 01:00:39.735638 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-29 01:00:39.735644 | orchestrator | Sunday 29 March 2026 00:58:44 +0000 (0:00:00.502) 0:00:42.593 ********** 2026-03-29 01:00:39.735651 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:00:39.735657 | orchestrator | 2026-03-29 01:00:39.735663 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-29 01:00:39.735669 | orchestrator | Sunday 29 March 2026 00:58:46 +0000 (0:00:02.139) 0:00:44.732 ********** 2026-03-29 01:00:39.735675 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:00:39.735681 | orchestrator | 2026-03-29 01:00:39.735687 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-29 01:00:39.735693 | orchestrator | Sunday 29 March 2026 00:58:49 +0000 (0:00:02.832) 0:00:47.565 ********** 2026-03-29 01:00:39.735699 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:00:39.735705 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:00:39.735711 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:00:39.735718 | orchestrator | 2026-03-29 01:00:39.735724 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-29 01:00:39.735730 | orchestrator | Sunday 29 March 2026 00:58:50 +0000 (0:00:00.942) 0:00:48.507 ********** 2026-03-29 01:00:39.735736 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:00:39.735742 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:00:39.735747 | orchestrator | ok: 
[testbed-node-2] 2026-03-29 01:00:39.735754 | orchestrator | 2026-03-29 01:00:39.735760 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-29 01:00:39.735766 | orchestrator | Sunday 29 March 2026 00:58:50 +0000 (0:00:00.336) 0:00:48.844 ********** 2026-03-29 01:00:39.735772 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:00:39.735778 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:00:39.735784 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:00:39.735790 | orchestrator | 2026-03-29 01:00:39.735796 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-29 01:00:39.735803 | orchestrator | Sunday 29 March 2026 00:58:50 +0000 (0:00:00.319) 0:00:49.164 ********** 2026-03-29 01:00:39.735808 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:00:39.735814 | orchestrator | 2026-03-29 01:00:39.735820 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-29 01:00:39.735826 | orchestrator | Sunday 29 March 2026 00:59:04 +0000 (0:00:13.438) 0:01:02.602 ********** 2026-03-29 01:00:39.735832 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:00:39.735839 | orchestrator | 2026-03-29 01:00:39.735845 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-29 01:00:39.735850 | orchestrator | Sunday 29 March 2026 00:59:15 +0000 (0:00:10.923) 0:01:13.526 ********** 2026-03-29 01:00:39.735856 | orchestrator | 2026-03-29 01:00:39.735862 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-29 01:00:39.735869 | orchestrator | Sunday 29 March 2026 00:59:15 +0000 (0:00:00.068) 0:01:13.594 ********** 2026-03-29 01:00:39.735875 | orchestrator | 2026-03-29 01:00:39.735882 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-29 
01:00:39.735888 | orchestrator | Sunday 29 March 2026 00:59:15 +0000 (0:00:00.065) 0:01:13.660 ********** 2026-03-29 01:00:39.735895 | orchestrator | 2026-03-29 01:00:39.735902 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-29 01:00:39.735908 | orchestrator | Sunday 29 March 2026 00:59:15 +0000 (0:00:00.076) 0:01:13.737 ********** 2026-03-29 01:00:39.735953 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:00:39.735961 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:00:39.735967 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:00:39.735974 | orchestrator | 2026-03-29 01:00:39.735981 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-29 01:00:39.735987 | orchestrator | Sunday 29 March 2026 00:59:24 +0000 (0:00:08.674) 0:01:22.412 ********** 2026-03-29 01:00:39.735999 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:00:39.736006 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:00:39.736013 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:00:39.736019 | orchestrator | 2026-03-29 01:00:39.736026 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-29 01:00:39.736032 | orchestrator | Sunday 29 March 2026 00:59:29 +0000 (0:00:05.157) 0:01:27.569 ********** 2026-03-29 01:00:39.736043 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:00:39.736050 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:00:39.736057 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:00:39.736063 | orchestrator | 2026-03-29 01:00:39.736070 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-29 01:00:39.736076 | orchestrator | Sunday 29 March 2026 00:59:37 +0000 (0:00:08.035) 0:01:35.605 ********** 2026-03-29 01:00:39.736083 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:00:39.736089 | orchestrator | 2026-03-29 01:00:39.736095 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-29 01:00:39.736101 | orchestrator | Sunday 29 March 2026 00:59:37 +0000 (0:00:00.507) 0:01:36.113 ********** 2026-03-29 01:00:39.736107 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:00:39.736114 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:00:39.736121 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:00:39.736127 | orchestrator | 2026-03-29 01:00:39.736134 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-29 01:00:39.736140 | orchestrator | Sunday 29 March 2026 00:59:38 +0000 (0:00:00.773) 0:01:36.886 ********** 2026-03-29 01:00:39.736147 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:00:39.736154 | orchestrator | 2026-03-29 01:00:39.736160 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-29 01:00:39.736167 | orchestrator | Sunday 29 March 2026 00:59:40 +0000 (0:00:01.627) 0:01:38.514 ********** 2026-03-29 01:00:39.736173 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-29 01:00:39.736181 | orchestrator | 2026-03-29 01:00:39.736187 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-29 01:00:39.736194 | orchestrator | Sunday 29 March 2026 00:59:54 +0000 (0:00:13.819) 0:01:52.334 ********** 2026-03-29 01:00:39.736201 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-29 01:00:39.736207 | orchestrator | 2026-03-29 01:00:39.736214 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-29 01:00:39.736221 | orchestrator | Sunday 29 March 2026 01:00:23 +0000 (0:00:29.374) 0:02:21.708 ********** 2026-03-29 01:00:39.736227 | orchestrator | ok: [testbed-node-0] => 
(item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-29 01:00:39.736235 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-29 01:00:39.736241 | orchestrator | 2026-03-29 01:00:39.736247 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-29 01:00:39.736253 | orchestrator | Sunday 29 March 2026 01:00:32 +0000 (0:00:08.781) 0:02:30.489 ********** 2026-03-29 01:00:39.736260 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:00:39.736266 | orchestrator | 2026-03-29 01:00:39.736273 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-29 01:00:39.736279 | orchestrator | Sunday 29 March 2026 01:00:32 +0000 (0:00:00.210) 0:02:30.700 ********** 2026-03-29 01:00:39.736286 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:00:39.736293 | orchestrator | 2026-03-29 01:00:39.736300 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-29 01:00:39.736306 | orchestrator | Sunday 29 March 2026 01:00:32 +0000 (0:00:00.190) 0:02:30.890 ********** 2026-03-29 01:00:39.736312 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:00:39.736319 | orchestrator | 2026-03-29 01:00:39.736326 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-29 01:00:39.736336 | orchestrator | Sunday 29 March 2026 01:00:32 +0000 (0:00:00.275) 0:02:31.165 ********** 2026-03-29 01:00:39.736343 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:00:39.736350 | orchestrator | 2026-03-29 01:00:39.736358 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-29 01:00:39.736385 | orchestrator | Sunday 29 March 2026 01:00:33 +0000 (0:00:00.807) 0:02:31.972 ********** 2026-03-29 01:00:39.736393 | orchestrator | ok: [testbed-node-0] 2026-03-29 
01:00:39.736400 | orchestrator | 2026-03-29 01:00:39.736406 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-29 01:00:39.736413 | orchestrator | Sunday 29 March 2026 01:00:37 +0000 (0:00:03.604) 0:02:35.577 ********** 2026-03-29 01:00:39.736420 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:00:39.736428 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:00:39.736435 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:00:39.736442 | orchestrator | 2026-03-29 01:00:39.736449 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:00:39.736456 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-29 01:00:39.736469 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-29 01:00:39.736476 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-29 01:00:39.736483 | orchestrator | 2026-03-29 01:00:39.736489 | orchestrator | 2026-03-29 01:00:39.736495 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:00:39.736502 | orchestrator | Sunday 29 March 2026 01:00:38 +0000 (0:00:00.640) 0:02:36.218 ********** 2026-03-29 01:00:39.736509 | orchestrator | =============================================================================== 2026-03-29 01:00:39.736515 | orchestrator | service-ks-register : keystone | Creating services --------------------- 29.37s 2026-03-29 01:00:39.736521 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.82s 2026-03-29 01:00:39.736528 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.44s 2026-03-29 01:00:39.736537 | orchestrator | keystone : Running Keystone fernet bootstrap container 
----------------- 10.92s 2026-03-29 01:00:39.736549 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.92s 2026-03-29 01:00:39.736556 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 8.78s 2026-03-29 01:00:39.736561 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 8.67s 2026-03-29 01:00:39.736568 | orchestrator | keystone : Restart keystone container ----------------------------------- 8.04s 2026-03-29 01:00:39.736574 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.89s 2026-03-29 01:00:39.736581 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.16s 2026-03-29 01:00:39.736588 | orchestrator | keystone : Creating default user role ----------------------------------- 3.60s 2026-03-29 01:00:39.736595 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.32s 2026-03-29 01:00:39.736601 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 2.84s 2026-03-29 01:00:39.736608 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.83s 2026-03-29 01:00:39.736615 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.47s 2026-03-29 01:00:39.736621 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.15s 2026-03-29 01:00:39.736627 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.14s 2026-03-29 01:00:39.736633 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.04s 2026-03-29 01:00:39.736638 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.63s 2026-03-29 01:00:39.736650 | orchestrator | keystone : Copying over wsgi-keystone.conf 
------------------------------ 1.61s 2026-03-29 01:00:39.736657 | orchestrator | 2026-03-29 01:00:39 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:00:39.736663 | orchestrator | 2026-03-29 01:00:39 | INFO  | Task 60ca6c37-86a5-42bd-92c5-06c0dec8515f is in state STARTED 2026-03-29 01:00:39.736670 | orchestrator | 2026-03-29 01:00:39 | INFO  | Task 4ee82f15-bb76-4bf5-9135-7a1b7ae7758f is in state STARTED 2026-03-29 01:00:39.736676 | orchestrator | 2026-03-29 01:00:39 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:00:39.736682 | orchestrator | 2026-03-29 01:00:39 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED 2026-03-29 01:00:39.736689 | orchestrator | 2026-03-29 01:00:39 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:00:42.791070 | orchestrator | 2026-03-29 01:00:42 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:00:42.791173 | orchestrator | 2026-03-29 01:00:42 | INFO  | Task 60ca6c37-86a5-42bd-92c5-06c0dec8515f is in state STARTED 2026-03-29 01:00:42.791184 | orchestrator | 2026-03-29 01:00:42 | INFO  | Task 4ee82f15-bb76-4bf5-9135-7a1b7ae7758f is in state STARTED 2026-03-29 01:00:42.791188 | orchestrator | 2026-03-29 01:00:42 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:00:42.791192 | orchestrator | 2026-03-29 01:00:42 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED 2026-03-29 01:00:42.791197 | orchestrator | 2026-03-29 01:00:42 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:00:45.814179 | orchestrator | 2026-03-29 01:00:45 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:00:45.814765 | orchestrator | 2026-03-29 01:00:45 | INFO  | Task 60ca6c37-86a5-42bd-92c5-06c0dec8515f is in state STARTED 2026-03-29 01:00:45.816576 | orchestrator | 2026-03-29 01:00:45 | INFO  | Task 
4ee82f15-bb76-4bf5-9135-7a1b7ae7758f is in state STARTED 2026-03-29 01:00:45.819716 | orchestrator | 2026-03-29 01:00:45 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:00:45.821038 | orchestrator | 2026-03-29 01:00:45 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED 2026-03-29 01:00:45.821137 | orchestrator | 2026-03-29 01:00:45 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:00:48.891421 | orchestrator | 2026-03-29 01:00:48 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:00:48.893645 | orchestrator | 2026-03-29 01:00:48 | INFO  | Task 60ca6c37-86a5-42bd-92c5-06c0dec8515f is in state STARTED 2026-03-29 01:00:48.895947 | orchestrator | 2026-03-29 01:00:48 | INFO  | Task 4ee82f15-bb76-4bf5-9135-7a1b7ae7758f is in state STARTED 2026-03-29 01:00:48.896716 | orchestrator | 2026-03-29 01:00:48 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:00:48.898104 | orchestrator | 2026-03-29 01:00:48 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED 2026-03-29 01:00:48.898140 | orchestrator | 2026-03-29 01:00:48 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:00:51.981928 | orchestrator | 2026-03-29 01:00:51 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:00:51.981990 | orchestrator | 2026-03-29 01:00:51 | INFO  | Task 60ca6c37-86a5-42bd-92c5-06c0dec8515f is in state STARTED 2026-03-29 01:00:51.981998 | orchestrator | 2026-03-29 01:00:51 | INFO  | Task 4ee82f15-bb76-4bf5-9135-7a1b7ae7758f is in state STARTED 2026-03-29 01:00:51.982003 | orchestrator | 2026-03-29 01:00:51 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:00:51.982052 | orchestrator | 2026-03-29 01:00:51 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED 2026-03-29 01:00:51.982059 | orchestrator | 2026-03-29 01:00:51 | INFO  | Wait 1 
second(s) until the next check 2026-03-29 01:00:55.002087 | orchestrator | 2026-03-29 01:00:55 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:00:55.002187 | orchestrator | 2026-03-29 01:00:55 | INFO  | Task 60ca6c37-86a5-42bd-92c5-06c0dec8515f is in state STARTED 2026-03-29 01:00:55.003242 | orchestrator | 2026-03-29 01:00:55 | INFO  | Task 4ee82f15-bb76-4bf5-9135-7a1b7ae7758f is in state STARTED 2026-03-29 01:00:55.003698 | orchestrator | 2026-03-29 01:00:55 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:00:55.004293 | orchestrator | 2026-03-29 01:00:55 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED 2026-03-29 01:00:55.004329 | orchestrator | 2026-03-29 01:00:55 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:00:58.292814 | orchestrator | 2026-03-29 01:00:58 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:00:58.292870 | orchestrator | 2026-03-29 01:00:58 | INFO  | Task 60ca6c37-86a5-42bd-92c5-06c0dec8515f is in state STARTED 2026-03-29 01:00:58.292876 | orchestrator | 2026-03-29 01:00:58 | INFO  | Task 4ee82f15-bb76-4bf5-9135-7a1b7ae7758f is in state STARTED 2026-03-29 01:00:58.292880 | orchestrator | 2026-03-29 01:00:58 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:00:58.292885 | orchestrator | 2026-03-29 01:00:58 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED 2026-03-29 01:00:58.292889 | orchestrator | 2026-03-29 01:00:58 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:01:01.080065 | orchestrator | 2026-03-29 01:01:01 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:01:01.080198 | orchestrator | 2026-03-29 01:01:01 | INFO  | Task 60ca6c37-86a5-42bd-92c5-06c0dec8515f is in state STARTED 2026-03-29 01:01:01.080927 | orchestrator | 2026-03-29 01:01:01 | INFO  | Task 
4ee82f15-bb76-4bf5-9135-7a1b7ae7758f is in state STARTED 2026-03-29 01:01:01.081748 | orchestrator | 2026-03-29 01:01:01 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:01:01.082208 | orchestrator | 2026-03-29 01:01:01 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED 2026-03-29 01:01:01.082233 | orchestrator | 2026-03-29 01:01:01 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:01:04.109936 | orchestrator | 2026-03-29 01:01:04 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:01:04.111192 | orchestrator | 2026-03-29 01:01:04 | INFO  | Task 60ca6c37-86a5-42bd-92c5-06c0dec8515f is in state STARTED 2026-03-29 01:01:04.111893 | orchestrator | 2026-03-29 01:01:04 | INFO  | Task 4ee82f15-bb76-4bf5-9135-7a1b7ae7758f is in state STARTED 2026-03-29 01:01:04.112211 | orchestrator | 2026-03-29 01:01:04 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:01:04.113018 | orchestrator | 2026-03-29 01:01:04 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED 2026-03-29 01:01:04.113038 | orchestrator | 2026-03-29 01:01:04 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:01:07.134000 | orchestrator | 2026-03-29 01:01:07 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:01:07.134081 | orchestrator | 2026-03-29 01:01:07 | INFO  | Task 60ca6c37-86a5-42bd-92c5-06c0dec8515f is in state SUCCESS 2026-03-29 01:01:07.136519 | orchestrator | 2026-03-29 01:01:07 | INFO  | Task 4ee82f15-bb76-4bf5-9135-7a1b7ae7758f is in state STARTED 2026-03-29 01:01:07.137289 | orchestrator | 2026-03-29 01:01:07 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:01:07.137323 | orchestrator | 2026-03-29 01:01:07 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED 2026-03-29 01:01:07.137340 | orchestrator | 2026-03-29 01:01:07 | INFO  | Wait 1 
second(s) until the next check 2026-03-29 01:01:10.188308 | orchestrator | 2026-03-29 01:01:10 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:01:10.188566 | orchestrator | 2026-03-29 01:01:10 | INFO  | Task 4ee82f15-bb76-4bf5-9135-7a1b7ae7758f is in state STARTED 2026-03-29 01:01:10.189188 | orchestrator | 2026-03-29 01:01:10 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:01:10.190946 | orchestrator | 2026-03-29 01:01:10 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED 2026-03-29 01:01:10.194355 | orchestrator | 2026-03-29 01:01:10 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:01:10.194424 | orchestrator | 2026-03-29 01:01:10 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:01:13.241923 | orchestrator | 2026-03-29 01:01:13 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:01:13.241995 | orchestrator | 2026-03-29 01:01:13 | INFO  | Task 4ee82f15-bb76-4bf5-9135-7a1b7ae7758f is in state STARTED 2026-03-29 01:01:13.242001 | orchestrator | 2026-03-29 01:01:13 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:01:13.242005 | orchestrator | 2026-03-29 01:01:13 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED 2026-03-29 01:01:13.242010 | orchestrator | 2026-03-29 01:01:13 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:01:13.242045 | orchestrator | 2026-03-29 01:01:13 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:01:16.262925 | orchestrator | 2026-03-29 01:01:16 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:01:16.263004 | orchestrator | 2026-03-29 01:01:16 | INFO  | Task 4ee82f15-bb76-4bf5-9135-7a1b7ae7758f is in state SUCCESS 2026-03-29 01:01:16.263016 | orchestrator | 2026-03-29 01:01:16.263023 | orchestrator | 2026-03-29 
01:01:16.263029 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:01:16.263036 | orchestrator | 2026-03-29 01:01:16.263042 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:01:16.263049 | orchestrator | Sunday 29 March 2026 01:00:29 +0000 (0:00:00.313) 0:00:00.313 ********** 2026-03-29 01:01:16.263056 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:01:16.263063 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:01:16.263070 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:01:16.263076 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:01:16.263080 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:01:16.263084 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:01:16.263089 | orchestrator | ok: [testbed-manager] 2026-03-29 01:01:16.263092 | orchestrator | 2026-03-29 01:01:16.263096 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:01:16.263101 | orchestrator | Sunday 29 March 2026 01:00:30 +0000 (0:00:00.735) 0:00:01.049 ********** 2026-03-29 01:01:16.263105 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-03-29 01:01:16.263109 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-03-29 01:01:16.263113 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-29 01:01:16.263117 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-29 01:01:16.263139 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-29 01:01:16.263143 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-29 01:01:16.263148 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-03-29 01:01:16.263151 | orchestrator | 2026-03-29 01:01:16.263155 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-29 
01:01:16.263159 | orchestrator | 2026-03-29 01:01:16.263163 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-03-29 01:01:16.263166 | orchestrator | Sunday 29 March 2026 01:00:31 +0000 (0:00:00.868) 0:00:01.917 ********** 2026-03-29 01:01:16.263172 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-03-29 01:01:16.263177 | orchestrator | 2026-03-29 01:01:16.263191 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-03-29 01:01:16.263195 | orchestrator | Sunday 29 March 2026 01:00:33 +0000 (0:00:02.221) 0:00:04.139 ********** 2026-03-29 01:01:16.263199 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2026-03-29 01:01:16.263203 | orchestrator | 2026-03-29 01:01:16.263206 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-03-29 01:01:16.263210 | orchestrator | Sunday 29 March 2026 01:00:37 +0000 (0:00:04.189) 0:00:08.328 ********** 2026-03-29 01:01:16.263215 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-29 01:01:16.263221 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-03-29 01:01:16.263225 | orchestrator | 2026-03-29 01:01:16.263229 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-29 01:01:16.263232 | orchestrator | Sunday 29 March 2026 01:00:45 +0000 (0:00:07.779) 0:00:16.108 ********** 2026-03-29 01:01:16.263236 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-29 01:01:16.263240 | orchestrator | 2026-03-29 01:01:16.263244 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating 
users] ************************* 2026-03-29 01:01:16.263248 | orchestrator | Sunday 29 March 2026 01:00:48 +0000 (0:00:03.047) 0:00:19.155 ********** 2026-03-29 01:01:16.263252 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2026-03-29 01:01:16.263256 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-29 01:01:16.263260 | orchestrator | 2026-03-29 01:01:16.263263 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-03-29 01:01:16.263267 | orchestrator | Sunday 29 March 2026 01:00:52 +0000 (0:00:04.148) 0:00:23.304 ********** 2026-03-29 01:01:16.263432 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-29 01:01:16.263447 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2026-03-29 01:01:16.263453 | orchestrator | 2026-03-29 01:01:16.263459 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-03-29 01:01:16.263463 | orchestrator | Sunday 29 March 2026 01:01:00 +0000 (0:00:07.148) 0:00:30.452 ********** 2026-03-29 01:01:16.263467 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2026-03-29 01:01:16.263471 | orchestrator | 2026-03-29 01:01:16.263475 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:01:16.263479 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:01:16.263484 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:01:16.263488 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:01:16.263492 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:01:16.263515 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2026-03-29 01:01:16.263519 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:01:16.263523 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:01:16.263527 | orchestrator | 2026-03-29 01:01:16.263530 | orchestrator | 2026-03-29 01:01:16.263535 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:01:16.263539 | orchestrator | Sunday 29 March 2026 01:01:06 +0000 (0:00:06.012) 0:00:36.465 ********** 2026-03-29 01:01:16.263542 | orchestrator | =============================================================================== 2026-03-29 01:01:16.263546 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.78s 2026-03-29 01:01:16.263550 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.15s 2026-03-29 01:01:16.263554 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 6.01s 2026-03-29 01:01:16.263557 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.19s 2026-03-29 01:01:16.263561 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.15s 2026-03-29 01:01:16.263565 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.05s 2026-03-29 01:01:16.263569 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.22s 2026-03-29 01:01:16.263572 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s 2026-03-29 01:01:16.263576 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.74s 2026-03-29 01:01:16.263580 | orchestrator | 2026-03-29 01:01:16.263584 | orchestrator | [WARNING]: Collection community.general does not 
support Ansible version 2026-03-29 01:01:16.263587 | orchestrator | 2.16.14 2026-03-29 01:01:16.263592 | orchestrator | 2026-03-29 01:01:16.263595 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2026-03-29 01:01:16.263599 | orchestrator | 2026-03-29 01:01:16.263603 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-29 01:01:16.263607 | orchestrator | Sunday 29 March 2026 01:00:24 +0000 (0:00:00.210) 0:00:00.210 ********** 2026-03-29 01:01:16.263610 | orchestrator | changed: [testbed-manager] 2026-03-29 01:01:16.263614 | orchestrator | 2026-03-29 01:01:16.263623 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-29 01:01:16.263626 | orchestrator | Sunday 29 March 2026 01:00:26 +0000 (0:00:02.405) 0:00:02.616 ********** 2026-03-29 01:01:16.263630 | orchestrator | changed: [testbed-manager] 2026-03-29 01:01:16.263634 | orchestrator | 2026-03-29 01:01:16.263640 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-29 01:01:16.263646 | orchestrator | Sunday 29 March 2026 01:00:27 +0000 (0:00:01.205) 0:00:03.821 ********** 2026-03-29 01:01:16.263652 | orchestrator | changed: [testbed-manager] 2026-03-29 01:01:16.263658 | orchestrator | 2026-03-29 01:01:16.263664 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-29 01:01:16.263670 | orchestrator | Sunday 29 March 2026 01:00:29 +0000 (0:00:01.370) 0:00:05.192 ********** 2026-03-29 01:01:16.263676 | orchestrator | changed: [testbed-manager] 2026-03-29 01:01:16.263682 | orchestrator | 2026-03-29 01:01:16.263687 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-29 01:01:16.263694 | orchestrator | Sunday 29 March 2026 01:00:30 +0000 (0:00:01.436) 0:00:06.629 ********** 2026-03-29 01:01:16.263699 | 
orchestrator | changed: [testbed-manager] 2026-03-29 01:01:16.263705 | orchestrator | 2026-03-29 01:01:16.263711 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-29 01:01:16.263717 | orchestrator | Sunday 29 March 2026 01:00:31 +0000 (0:00:00.939) 0:00:07.568 ********** 2026-03-29 01:01:16.263729 | orchestrator | changed: [testbed-manager] 2026-03-29 01:01:16.263736 | orchestrator | 2026-03-29 01:01:16.263743 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-29 01:01:16.263748 | orchestrator | Sunday 29 March 2026 01:00:32 +0000 (0:00:00.889) 0:00:08.457 ********** 2026-03-29 01:01:16.263754 | orchestrator | changed: [testbed-manager] 2026-03-29 01:01:16.263761 | orchestrator | 2026-03-29 01:01:16.263767 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-29 01:01:16.263773 | orchestrator | Sunday 29 March 2026 01:00:33 +0000 (0:00:01.215) 0:00:09.673 ********** 2026-03-29 01:01:16.263779 | orchestrator | changed: [testbed-manager] 2026-03-29 01:01:16.263786 | orchestrator | 2026-03-29 01:01:16.263792 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-29 01:01:16.263798 | orchestrator | Sunday 29 March 2026 01:00:34 +0000 (0:00:01.019) 0:00:10.692 ********** 2026-03-29 01:01:16.263805 | orchestrator | changed: [testbed-manager] 2026-03-29 01:01:16.263811 | orchestrator | 2026-03-29 01:01:16.263818 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-29 01:01:16.263824 | orchestrator | Sunday 29 March 2026 01:00:48 +0000 (0:00:13.925) 0:00:24.617 ********** 2026-03-29 01:01:16.263830 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:01:16.263836 | orchestrator | 2026-03-29 01:01:16.263842 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 
2026-03-29 01:01:16.263849 | orchestrator | 2026-03-29 01:01:16.263855 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-29 01:01:16.263862 | orchestrator | Sunday 29 March 2026 01:00:48 +0000 (0:00:00.158) 0:00:24.776 ********** 2026-03-29 01:01:16.263868 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:01:16.263874 | orchestrator | 2026-03-29 01:01:16.263881 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-29 01:01:16.263887 | orchestrator | 2026-03-29 01:01:16.263893 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-29 01:01:16.263899 | orchestrator | Sunday 29 March 2026 01:01:01 +0000 (0:00:12.944) 0:00:37.720 ********** 2026-03-29 01:01:16.263913 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:01:16.263920 | orchestrator | 2026-03-29 01:01:16.263926 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-29 01:01:16.263933 | orchestrator | 2026-03-29 01:01:16.263939 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-29 01:01:16.263945 | orchestrator | Sunday 29 March 2026 01:01:03 +0000 (0:00:01.618) 0:00:39.339 ********** 2026-03-29 01:01:16.263951 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:01:16.263958 | orchestrator | 2026-03-29 01:01:16.264032 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:01:16.264039 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 01:01:16.264045 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:01:16.264052 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:01:16.264060 | 
orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:01:16.264066 | orchestrator | 2026-03-29 01:01:16.264073 | orchestrator | 2026-03-29 01:01:16.264080 | orchestrator | 2026-03-29 01:01:16.264087 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:01:16.264094 | orchestrator | Sunday 29 March 2026 01:01:14 +0000 (0:00:11.418) 0:00:50.757 ********** 2026-03-29 01:01:16.264100 | orchestrator | =============================================================================== 2026-03-29 01:01:16.264107 | orchestrator | Restart ceph manager service ------------------------------------------- 25.98s 2026-03-29 01:01:16.264119 | orchestrator | Create admin user ------------------------------------------------------ 13.93s 2026-03-29 01:01:16.264125 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.41s 2026-03-29 01:01:16.264131 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.44s 2026-03-29 01:01:16.264137 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.37s 2026-03-29 01:01:16.264143 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.22s 2026-03-29 01:01:16.264149 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.21s 2026-03-29 01:01:16.264160 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.02s 2026-03-29 01:01:16.264167 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.94s 2026-03-29 01:01:16.264172 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.89s 2026-03-29 01:01:16.264178 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s 2026-03-29 01:01:16.264184 | 
orchestrator | 2026-03-29 01:01:16 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:01:16.264195 | orchestrator | 2026-03-29 01:01:16 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED 2026-03-29 01:01:16.264877 | orchestrator | 2026-03-29 01:01:16 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:01:16.264926 | orchestrator | 2026-03-29 01:01:16 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:01:19.291769 | orchestrator | 2026-03-29 01:01:19 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:01:19.291830 | orchestrator | 2026-03-29 01:01:19 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:01:19.291839 | orchestrator | 2026-03-29 01:01:19 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED 2026-03-29 01:01:19.291853 | orchestrator | 2026-03-29 01:01:19 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:01:19.291860 | orchestrator | 2026-03-29 01:01:19 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:01:22.316253 | orchestrator | 2026-03-29 01:01:22 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:01:22.316484 | orchestrator | 2026-03-29 01:01:22 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:01:22.317018 | orchestrator | 2026-03-29 01:01:22 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED 2026-03-29 01:01:22.317713 | orchestrator | 2026-03-29 01:01:22 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:01:22.317745 | orchestrator | 2026-03-29 01:01:22 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:01:25.341840 | orchestrator | 2026-03-29 01:01:25 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:01:25.343139 | orchestrator | 2026-03-29 
01:01:25 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:01:25.343599 | orchestrator | 2026-03-29 01:01:25 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED 2026-03-29 01:01:25.344193 | orchestrator | 2026-03-29 01:01:25 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:01:25.344219 | orchestrator | 2026-03-29 01:01:25 | INFO  | Wait 1 second(s) until the next check [... identical polling of the same four tasks repeats every ~3 seconds from 01:01:28 through 01:03:20 ...] 2026-03-29 01:03:23.918853 | orchestrator | 2026-03-29 01:03:23 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:03:23.919874 | orchestrator | 2026-03-29 01:03:23 | INFO  | Task
16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:03:23.921287 | orchestrator | 2026-03-29 01:03:23 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED 2026-03-29 01:03:23.922483 | orchestrator | 2026-03-29 01:03:23 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:03:23.922517 | orchestrator | 2026-03-29 01:03:23 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:03:26.975735 | orchestrator | 2026-03-29 01:03:26 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:03:26.979193 | orchestrator | 2026-03-29 01:03:26 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:03:26.979238 | orchestrator | 2026-03-29 01:03:26 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED 2026-03-29 01:03:26.981002 | orchestrator | 2026-03-29 01:03:26 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:03:26.981246 | orchestrator | 2026-03-29 01:03:26 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:03:30.022917 | orchestrator | 2026-03-29 01:03:30 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:03:30.022994 | orchestrator | 2026-03-29 01:03:30 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:03:30.023006 | orchestrator | 2026-03-29 01:03:30 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state STARTED 2026-03-29 01:03:30.023012 | orchestrator | 2026-03-29 01:03:30 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:03:30.026439 | orchestrator | 2026-03-29 01:03:30 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:03:33.057471 | orchestrator | 2026-03-29 01:03:33 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:03:33.059491 | orchestrator | 2026-03-29 01:03:33 | INFO  | Task 
16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:03:33.062237 | orchestrator | 2026-03-29 01:03:33.062283 | orchestrator | 2026-03-29 01:03:33.062291 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:03:33.062298 | orchestrator | 2026-03-29 01:03:33.062305 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:03:33.062311 | orchestrator | Sunday 29 March 2026 01:00:30 +0000 (0:00:00.347) 0:00:00.347 ********** 2026-03-29 01:03:33.062316 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:03:33.062321 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:03:33.062325 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:03:33.062332 | orchestrator | 2026-03-29 01:03:33.062338 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:03:33.062344 | orchestrator | Sunday 29 March 2026 01:00:30 +0000 (0:00:00.313) 0:00:00.660 ********** 2026-03-29 01:03:33.062350 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-29 01:03:33.062358 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-29 01:03:33.062364 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-29 01:03:33.062370 | orchestrator | 2026-03-29 01:03:33.062378 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-29 01:03:33.062382 | orchestrator | 2026-03-29 01:03:33.062386 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-29 01:03:33.062389 | orchestrator | Sunday 29 March 2026 01:00:30 +0000 (0:00:00.332) 0:00:00.993 ********** 2026-03-29 01:03:33.062393 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:03:33.062398 | orchestrator | 2026-03-29 01:03:33.062402 | 
orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-03-29 01:03:33.062406 | orchestrator | Sunday 29 March 2026 01:00:31 +0000 (0:00:00.703) 0:00:01.696 **********
2026-03-29 01:03:33.062410 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-03-29 01:03:33.062427 | orchestrator |
2026-03-29 01:03:33.062432 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-03-29 01:03:33.062436 | orchestrator | Sunday 29 March 2026 01:00:35 +0000 (0:00:04.199) 0:00:05.896 **********
2026-03-29 01:03:33.062440 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-03-29 01:03:33.062444 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-03-29 01:03:33.062448 | orchestrator |
2026-03-29 01:03:33.062452 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-03-29 01:03:33.062456 | orchestrator | Sunday 29 March 2026 01:00:43 +0000 (0:00:07.847) 0:00:13.744 **********
2026-03-29 01:03:33.062460 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-03-29 01:03:33.062464 | orchestrator |
2026-03-29 01:03:33.062467 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-03-29 01:03:33.062471 | orchestrator | Sunday 29 March 2026 01:00:47 +0000 (0:00:03.476) 0:00:17.220 **********
2026-03-29 01:03:33.062487 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-03-29 01:03:33.062491 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-29 01:03:33.062495 | orchestrator |
2026-03-29 01:03:33.062499 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-03-29 01:03:33.062503 | orchestrator | Sunday 29 March 2026 01:00:50 +0000 (0:00:03.451)
0:00:20.672 ********** 2026-03-29 01:03:33.062512 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-29 01:03:33.062516 | orchestrator | 2026-03-29 01:03:33.062520 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-29 01:03:33.062524 | orchestrator | Sunday 29 March 2026 01:00:54 +0000 (0:00:03.753) 0:00:24.425 ********** 2026-03-29 01:03:33.062528 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-29 01:03:33.062531 | orchestrator | 2026-03-29 01:03:33.062535 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-29 01:03:33.062539 | orchestrator | Sunday 29 March 2026 01:00:58 +0000 (0:00:04.305) 0:00:28.730 ********** 2026-03-29 01:03:33.062555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:03:33.062561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:03:33.062570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:03:33.062575 | orchestrator | 2026-03-29 01:03:33.062579 | 
orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-29 01:03:33.062583 | orchestrator | Sunday 29 March 2026 01:01:02 +0000 (0:00:04.229) 0:00:32.960 **********
2026-03-29 01:03:33.062587 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:03:33.062591 | orchestrator |
2026-03-29 01:03:33.062595 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-03-29 01:03:33.062601 | orchestrator | Sunday 29 March 2026 01:01:03 +0000 (0:00:00.548) 0:00:33.508 **********
2026-03-29 01:03:33.062605 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:03:33.062609 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:03:33.062613 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:03:33.062617 | orchestrator |
2026-03-29 01:03:33.062621 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-03-29 01:03:33.062625 | orchestrator | Sunday 29 March 2026 01:01:06 +0000 (0:00:03.544) 0:00:37.052 **********
2026-03-29 01:03:33.062629 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-29 01:03:33.062633 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-29 01:03:33.062636 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-29 01:03:33.062640 | orchestrator |
2026-03-29 01:03:33.062644 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-03-29 01:03:33.062648 | orchestrator | Sunday 29 March 2026 01:01:08 +0000 (0:00:01.520) 0:00:38.573 **********
2026-03-29 01:03:33.062651 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-29 01:03:33.062656 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-29 01:03:33.062666 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-29 01:03:33.062672 | orchestrator |
2026-03-29 01:03:33.062678 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-03-29 01:03:33.062683 | orchestrator | Sunday 29 March 2026 01:01:09 +0000 (0:00:01.277) 0:00:39.851 **********
2026-03-29 01:03:33.062690 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:03:33.062695 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:03:33.062701 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:03:33.062707 | orchestrator |
2026-03-29 01:03:33.062713 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-03-29 01:03:33.062719 | orchestrator | Sunday 29 March 2026 01:01:10 +0000 (0:00:00.090) 0:00:40.549 **********
2026-03-29 01:03:33.062725 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:03:33.062732 | orchestrator |
2026-03-29 01:03:33.062738 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-03-29 01:03:33.062745 | orchestrator | Sunday 29 March 2026 01:01:10 +0000 (0:00:00.220) 0:00:40.640 **********
2026-03-29 01:03:33.062749 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:03:33.062753 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:03:33.062757 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:03:33.062764 | orchestrator |
2026-03-29 01:03:33.062770 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-29 01:03:33.062776 | orchestrator | Sunday 29 March 2026 01:01:10 +0000 (0:00:00.543) 0:00:40.860 **********
2026-03-29 01:03:33.062782 | orchestrator | included:
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:03:33.062789 | orchestrator | 2026-03-29 01:03:33.062797 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-29 01:03:33.062807 | orchestrator | Sunday 29 March 2026 01:01:11 +0000 (0:00:00.543) 0:00:41.403 ********** 2026-03-29 01:03:33.062814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:03:33.062827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:03:33.062842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:03:33.062849 | orchestrator | 2026-03-29 01:03:33.062855 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-29 01:03:33.062861 | orchestrator | Sunday 29 March 2026 01:01:14 +0000 (0:00:03.495) 0:00:44.899 ********** 2026-03-29 01:03:33.062873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 01:03:33.062884 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:03:33.062894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 01:03:33.062901 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:03:33.062912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 01:03:33.062923 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:03:33.062929 | orchestrator | 2026-03-29 01:03:33.062935 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-29 01:03:33.062942 | orchestrator | Sunday 29 March 2026 01:01:20 +0000 (0:00:05.488) 0:00:50.387 ********** 2026-03-29 01:03:33.062949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 01:03:33.062956 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:03:33.062965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 01:03:33.062977 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:03:33.062990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 01:03:33.062997 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:03:33.063004 | orchestrator | 2026-03-29 01:03:33.063011 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-29 01:03:33.063017 | orchestrator | Sunday 29 March 2026 01:01:24 +0000 (0:00:04.253) 0:00:54.640 ********** 2026-03-29 01:03:33.063024 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:03:33.063031 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:03:33.063038 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:03:33.063044 | orchestrator | 2026-03-29 01:03:33.063051 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-29 01:03:33.063058 | orchestrator | Sunday 29 March 2026 01:01:28 +0000 (0:00:03.720) 0:00:58.360 ********** 2026-03-29 01:03:33.063068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:03:33.063085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:03:33.063093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:03:33.063131 | orchestrator | 2026-03-29 01:03:33.063139 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-29 01:03:33.063146 | orchestrator | Sunday 29 March 2026 01:01:32 +0000 (0:00:04.364) 0:01:02.725 ********** 2026-03-29 01:03:33.063153 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:03:33.063160 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:03:33.063172 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:03:33.063177 | orchestrator | 2026-03-29 01:03:33.063182 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-29 01:03:33.063188 | orchestrator | Sunday 29 March 2026 01:01:37 +0000 (0:00:05.430) 0:01:08.156 ********** 2026-03-29 01:03:33.063195 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:03:33.063202 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:03:33.063208 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:03:33.063214 | orchestrator | 2026-03-29 01:03:33.063220 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-29 01:03:33.063226 | orchestrator | Sunday 29 March 2026 01:01:42 
+0000 (0:00:04.530) 0:01:12.686 ********** 2026-03-29 01:03:33.063233 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:03:33.063240 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:03:33.063246 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:03:33.063253 | orchestrator | 2026-03-29 01:03:33.063259 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-29 01:03:33.063265 | orchestrator | Sunday 29 March 2026 01:01:45 +0000 (0:00:03.115) 0:01:15.801 ********** 2026-03-29 01:03:33.063272 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:03:33.063278 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:03:33.063287 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:03:33.063406 | orchestrator | 2026-03-29 01:03:33 | INFO  | Task 1349bb5e-0833-43cc-95ae-00c3ffd7c70d is in state SUCCESS 2026-03-29 01:03:33.063415 | orchestrator | 2026-03-29 01:03:33.063419 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-29 01:03:33.063423 | orchestrator | Sunday 29 March 2026 01:01:49 +0000 (0:00:04.275) 0:01:20.077 ********** 2026-03-29 01:03:33.063427 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:03:33.063431 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:03:33.063434 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:03:33.063438 | orchestrator | 2026-03-29 01:03:33.063442 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-29 01:03:33.063446 | orchestrator | Sunday 29 March 2026 01:01:54 +0000 (0:00:00.530) 0:01:24.570 ********** 2026-03-29 01:03:33.063450 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:03:33.063454 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:03:33.063457 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:03:33.063461 | orchestrator | 2026-03-29 01:03:33.063465 | orchestrator | TASK [glance : Copying
over glance-haproxy-tls.cfg] **************************** 2026-03-29 01:03:33.063469 | orchestrator | Sunday 29 March 2026 01:01:54 +0000 (0:00:00.530) 0:01:25.100 ********** 2026-03-29 01:03:33.063473 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-29 01:03:33.063476 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:03:33.063480 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-29 01:03:33.063484 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:03:33.063488 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-29 01:03:33.063492 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:03:33.063495 | orchestrator | 2026-03-29 01:03:33.063499 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-29 01:03:33.063503 | orchestrator | Sunday 29 March 2026 01:01:59 +0000 (0:00:04.613) 0:01:29.714 ********** 2026-03-29 01:03:33.063507 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:03:33.063511 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:03:33.063514 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:03:33.063518 | orchestrator | 2026-03-29 01:03:33.063522 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-03-29 01:03:33.063526 | orchestrator | Sunday 29 March 2026 01:02:03 +0000 (0:00:04.111) 0:01:33.826 ********** 2026-03-29 01:03:33.063529 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:03:33.063537 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:03:33.063541 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:03:33.063545 | orchestrator | 2026-03-29 01:03:33.063548 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-29 01:03:33.063552 | 
orchestrator | Sunday 29 March 2026 01:02:07 +0000 (0:00:04.046) 0:01:37.872 ********** 2026-03-29 01:03:33.063560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:03:33.063569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:03:33.063576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:03:33.063593 | orchestrator | 2026-03-29 01:03:33.063604 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-29 01:03:33.063614 | orchestrator | Sunday 29 March 2026 01:02:12 +0000 (0:00:04.642) 0:01:42.515 ********** 2026-03-29 01:03:33.063621 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:03:33.063627 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:03:33.063633 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:03:33.063640 | orchestrator | 2026-03-29 01:03:33.063646 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-29 01:03:33.063652 | orchestrator | Sunday 29 March 2026 01:02:12 +0000 
(0:00:00.261) 0:01:42.777 ********** 2026-03-29 01:03:33.063658 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:03:33.063664 | orchestrator | 2026-03-29 01:03:33.063670 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-03-29 01:03:33.063676 | orchestrator | Sunday 29 March 2026 01:02:15 +0000 (0:00:02.478) 0:01:45.255 ********** 2026-03-29 01:03:33.063683 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:03:33.063689 | orchestrator | 2026-03-29 01:03:33.063696 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-29 01:03:33.063702 | orchestrator | Sunday 29 March 2026 01:02:17 +0000 (0:00:02.724) 0:01:47.980 ********** 2026-03-29 01:03:33.063709 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:03:33.063715 | orchestrator | 2026-03-29 01:03:33.063721 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-29 01:03:33.063727 | orchestrator | Sunday 29 March 2026 01:02:20 +0000 (0:00:02.471) 0:01:50.452 ********** 2026-03-29 01:03:33.063734 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:03:33.063740 | orchestrator | 2026-03-29 01:03:33.063747 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-29 01:03:33.063753 | orchestrator | Sunday 29 March 2026 01:02:49 +0000 (0:00:28.960) 0:02:19.413 ********** 2026-03-29 01:03:33.063759 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:03:33.063765 | orchestrator | 2026-03-29 01:03:33.063775 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-29 01:03:33.063782 | orchestrator | Sunday 29 March 2026 01:02:52 +0000 (0:00:02.817) 0:02:22.230 ********** 2026-03-29 01:03:33.063788 | orchestrator | 2026-03-29 01:03:33.063795 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-29 
01:03:33.063801 | orchestrator | Sunday 29 March 2026 01:02:52 +0000 (0:00:00.121) 0:02:22.352 ********** 2026-03-29 01:03:33.063807 | orchestrator | 2026-03-29 01:03:33.063814 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-29 01:03:33.063820 | orchestrator | Sunday 29 March 2026 01:02:52 +0000 (0:00:00.118) 0:02:22.470 ********** 2026-03-29 01:03:33.063833 | orchestrator | 2026-03-29 01:03:33.063839 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-29 01:03:33.063845 | orchestrator | Sunday 29 March 2026 01:02:52 +0000 (0:00:00.131) 0:02:22.602 ********** 2026-03-29 01:03:33.063851 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:03:33.063859 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:03:33.063865 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:03:33.063872 | orchestrator | 2026-03-29 01:03:33.063879 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:03:33.063886 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2026-03-29 01:03:33.063894 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-29 01:03:33.063901 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-29 01:03:33.063907 | orchestrator | 2026-03-29 01:03:33.063913 | orchestrator | 2026-03-29 01:03:33.063921 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:03:33.063927 | orchestrator | Sunday 29 March 2026 01:03:32 +0000 (0:00:40.438) 0:03:03.040 ********** 2026-03-29 01:03:33.063932 | orchestrator | =============================================================================== 2026-03-29 01:03:33.063939 | orchestrator | glance : Restart glance-api container 
---------------------------------- 40.44s 2026-03-29 01:03:33.063945 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.96s 2026-03-29 01:03:33.063952 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.85s 2026-03-29 01:03:33.063959 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 5.49s 2026-03-29 01:03:33.063966 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.43s 2026-03-29 01:03:33.063973 | orchestrator | glance : Check glance containers ---------------------------------------- 4.64s 2026-03-29 01:03:33.063980 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.61s 2026-03-29 01:03:33.063990 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.53s 2026-03-29 01:03:33.063998 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.49s 2026-03-29 01:03:33.064005 | orchestrator | glance : Copying over config.json files for services -------------------- 4.36s 2026-03-29 01:03:33.064013 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.31s 2026-03-29 01:03:33.064020 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.28s 2026-03-29 01:03:33.064026 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.25s 2026-03-29 01:03:33.064033 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.23s 2026-03-29 01:03:33.064040 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.20s 2026-03-29 01:03:33.064046 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.11s 2026-03-29 01:03:33.064053 | orchestrator | glance : Generating 'hostid' file for glance_api 
------------------------ 4.05s 2026-03-29 01:03:33.064060 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.75s 2026-03-29 01:03:33.064066 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.72s 2026-03-29 01:03:33.064074 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.54s 2026-03-29 01:03:33.064081 | orchestrator | 2026-03-29 01:03:33 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:03:33.064088 | orchestrator | 2026-03-29 01:03:33 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:03:36.106214 | orchestrator | 2026-03-29 01:03:36 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:03:36.108402 | orchestrator | 2026-03-29 01:03:36 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED 2026-03-29 01:03:36.110292 | orchestrator | 2026-03-29 01:03:36 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:03:36.112800 | orchestrator | 2026-03-29 01:03:36 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:03:36.112842 | orchestrator | 2026-03-29 01:03:36 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:03:39.146534 | orchestrator | 2026-03-29 01:03:39 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:03:39.146748 | orchestrator | 2026-03-29 01:03:39 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED 2026-03-29 01:03:39.148921 | orchestrator | 2026-03-29 01:03:39 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state STARTED 2026-03-29 01:03:39.150734 | orchestrator | 2026-03-29 01:03:39 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:03:39.150954 | orchestrator | 2026-03-29 01:03:39 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:03:42.190238 | 
orchestrator | 2026-03-29 01:03:42 | INFO  | Task e5e9fde8-f450-428e-b9af-bb5d6525f408 is in state STARTED 2026-03-29 01:03:42.190433 | orchestrator | 2026-03-29 01:03:42 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state STARTED 2026-03-29 01:03:42.191369 | orchestrator | 2026-03-29 01:03:42 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED 2026-03-29 01:03:42.193796 | orchestrator | 2026-03-29 01:03:42.193854 | orchestrator | 2026-03-29 01:03:42 | INFO  | Task 16d40639-f2c8-4df4-ad0f-208f6999aa2a is in state SUCCESS 2026-03-29 01:03:42.195995 | orchestrator | 2026-03-29 01:03:42.196064 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:03:42.196071 | orchestrator | 2026-03-29 01:03:42.196076 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:03:42.196081 | orchestrator | Sunday 29 March 2026 01:00:23 +0000 (0:00:00.356) 0:00:00.357 ********** 2026-03-29 01:03:42.196180 | orchestrator | ok: [testbed-manager] 2026-03-29 01:03:42.196191 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:03:42.196195 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:03:42.196199 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:03:42.196204 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:03:42.196207 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:03:42.196211 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:03:42.196215 | orchestrator | 2026-03-29 01:03:42.196219 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:03:42.196224 | orchestrator | Sunday 29 March 2026 01:00:24 +0000 (0:00:00.850) 0:00:01.207 ********** 2026-03-29 01:03:42.196228 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-29 01:03:42.196232 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-29 01:03:42.196236 | orchestrator | 
ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-29 01:03:42.196240 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-29 01:03:42.196244 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-29 01:03:42.196248 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-29 01:03:42.196251 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-29 01:03:42.196255 | orchestrator | 2026-03-29 01:03:42.196259 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-03-29 01:03:42.196263 | orchestrator | 2026-03-29 01:03:42.196535 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-29 01:03:42.196559 | orchestrator | Sunday 29 March 2026 01:00:24 +0000 (0:00:00.848) 0:00:02.055 ********** 2026-03-29 01:03:42.196582 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 01:03:42.196588 | orchestrator | 2026-03-29 01:03:42.196594 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-29 01:03:42.196600 | orchestrator | Sunday 29 March 2026 01:00:26 +0000 (0:00:01.261) 0:00:03.317 ********** 2026-03-29 01:03:42.196612 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-29 01:03:42.196623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.196631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.196638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.196659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.196665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.196677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.196690 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.196696 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.196703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.196712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.196719 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.196733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.196739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.196754 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.196762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.196770 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-29 01:03:42.196777 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.196784 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.196795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.196801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.196817 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.196825 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.196831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.196838 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.196907 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.196912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.196921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.196930 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.196936 | orchestrator | 2026-03-29 01:03:42.196942 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-29 01:03:42.196949 | orchestrator | Sunday 29 March 2026 01:00:30 +0000 (0:00:04.315) 0:00:07.632 ********** 2026-03-29 01:03:42.196962 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 01:03:42.196970 | orchestrator | 2026-03-29 01:03:42.196977 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-29 01:03:42.196983 | orchestrator | Sunday 29 March 2026 01:00:31 +0000 (0:00:01.431) 0:00:09.063 ********** 2026-03-29 01:03:42.196990 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-29 01:03:42.196997 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.197004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.197010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.197021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.197033 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.197039 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.197048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.197055 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.197156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.197162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.197167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.197177 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.197188 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.197196 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.197201 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.197206 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.197212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.197216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.197221 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.197238 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.197243 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-29 01:03:42.197252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.197257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.197723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.197740 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.197746 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.197789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.197797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.197803 | orchestrator | 2026-03-29 01:03:42.197809 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-29 01:03:42.197817 | orchestrator | Sunday 29 March 2026 01:00:37 +0000 (0:00:05.714) 0:00:14.778 ********** 2026-03-29 01:03:42.197830 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-29 01:03:42.197837 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:03:42.197843 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.197851 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 
'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-29 01:03:42.197880 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.197885 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:03:42.197891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 
01:03:42.197895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.197902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.197907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.197911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.197915 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:03:42.197919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:03:42.197927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.197942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.197947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.197951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.197961 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:03:42.197967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:03:42.197974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.197982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.197997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.198003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.198008 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:03:42.198071 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:03:42.198079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.198147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.198154 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:03:42.198158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:03:42.198217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.198228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.198232 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:03:42.198236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:03:42.198240 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.198258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.198262 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:03:42.198266 | orchestrator | 2026-03-29 01:03:42.198270 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-29 01:03:42.198274 | orchestrator | Sunday 29 March 2026 01:00:39 +0000 (0:00:01.601) 0:00:16.379 ********** 2026-03-29 01:03:42.198282 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-29 01:03:42.198286 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:03:42.198290 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.198304 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-29 01:03:42.198313 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.198319 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:03:42.198346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:03:42.198355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.198366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.198398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.198410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.198415 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:03:42.198419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:03:42.198424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.198429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.198446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.198735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.198763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:03:42.198767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.198781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.198785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.198789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:03:42.198793 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:03:42.198797 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:03:42.198820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:03:42.198825 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.198829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.198833 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:03:42.198840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:03:42.198848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.198852 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.198856 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:03:42.198860 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:03:42.198864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.198880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 01:03:42.198885 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:03:42.198889 | orchestrator | 2026-03-29 01:03:42.198893 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-29 01:03:42.198897 | orchestrator | Sunday 29 March 2026 01:00:41 +0000 (0:00:02.063) 0:00:18.443 ********** 2026-03-29 01:03:42.198901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.198910 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-29 01:03:42.198921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.198930 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.198941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-03-29 01:03:42.198947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.198973 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.198980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.198986 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.199001 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:03:42.199008 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.199015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.199020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.199024 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.199042 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.199047 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.199055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.199062 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.199066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.199070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.199074 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.199078 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.199117 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-29 01:03:42.199127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.199134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:03:42.199138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.199143 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.199147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.199151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:03:42.199155 | orchestrator | 2026-03-29 01:03:42.199159 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-29 01:03:42.199163 | orchestrator | Sunday 29 March 2026 01:00:47 +0000 (0:00:06.338) 
0:00:24.781 ********** 2026-03-29 01:03:42.199169 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-29 01:03:42.199176 | orchestrator | 2026-03-29 01:03:42.199185 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-29 01:03:42.199216 | orchestrator | Sunday 29 March 2026 01:00:48 +0000 (0:00:00.861) 0:00:25.643 ********** 2026-03-29 01:03:42.199223 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102521, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7547195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199230 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102521, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7547195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199240 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102521, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 
'mtime': 1774742554.0, 'ctime': 1774743590.7547195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199247 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102521, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7547195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199255 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1102547, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.760277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199262 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1102513, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.752719, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199285 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1102547, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.760277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199300 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102521, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7547195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 01:03:42.199306 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1102539, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7591937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199318 | orchestrator | 
skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1102547, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.760277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199325 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1102547, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.760277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199330 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102509, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7507184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199336 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1102513, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.752719, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199361 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1102513, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.752719, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199374 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1102523, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7561266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199381 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102521, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 
'mtime': 1774742554.0, 'ctime': 1774743590.7547195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199391 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1102521, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7547195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199399 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1102539, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7591937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199406 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1102539, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7591937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199412 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1102513, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.752719, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199424 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102509, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7507184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199450 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1102535, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7587206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199455 | orchestrator | skipping: 
[testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1102547, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.760277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199463 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1102547, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.760277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199468 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1102523, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7561266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199473 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1102509, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7507184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199480 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1102535, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7587206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.199492 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1102547, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.760277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 01:03:42.199520 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1102513, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 
2026-03-29 01:03:42.199530 | orchestrator | [loop output condensed] The task iterates over the Prometheus rule files found under /operations/prometheus/ — alertmanager.rules, alertmanager.rec.rules, cadvisor.rules, ceph.rules, ceph.rec.rules, elasticsearch.rules, haproxy.rules, hardware.rules, mysql.rules, node.rules, node.rec.rules, openstack.rules, prometheus-extra.rules, prometheus.rec.rules, rabbitmq.rules, redfish.rules — each a root-owned regular file with mode 0644; every loop item prints the file's full stat dictionary (path, mode, uid, gid, size, inode, timestamps, permission flags).
2026-03-29 01:03:42.199530 | orchestrator | skipping: [testbed-node-0] … skipping: [testbed-node-5] (all rule-file items skipped on testbed-node-0 through testbed-node-5)
2026-03-29 01:03:42.199708 | orchestrator | changed: [testbed-manager] (rule files applied on the manager, e.g. ceph.rules, openstack.rules, cadvisor.rules)
1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7507184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 01:03:42.200275 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1102511, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.751218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.200283 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1102533, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7577202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.200289 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1102508, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7506413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.200295 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1102508, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7506413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.200307 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1102533, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7577202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.200312 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1102531, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7573328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.200322 | 
orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1102511, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.751218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.200333 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1102531, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7573328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.200341 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1102531, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7573328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.200347 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1102553, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7617214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.200354 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:03:42.200362 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1102533, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7577202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.200372 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1102508, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7506413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.200379 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 
'inode': 1102531, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7573328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.200389 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1102553, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7617214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.200400 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:03:42.200408 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1102553, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7617214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.200414 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:03:42.200420 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1102533, 
'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7577202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.200428 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1102553, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7617214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.200439 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:03:42.200445 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1102531, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7573328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.200459 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1102523, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 
1774743590.7561266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 01:03:42.200465 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1102553, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7617214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:03:42.200480 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:03:42.200490 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1102535, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7587206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 01:03:42.200497 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1102528, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.75672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 01:03:42.200503 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1102519, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7547195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 01:03:42.200509 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1102546, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7599192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 01:03:42.200515 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1102506, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7504334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 
01:03:42.200525 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1102555, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.762622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 01:03:42.200532 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1102543, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7599192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 01:03:42.200546 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1102511, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.751218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 01:03:42.200552 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1102508, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7506413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 01:03:42.200559 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1102533, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7577202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 01:03:42.200565 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1102531, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7573328, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 01:03:42.200571 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1102553, 'dev': 96, 'nlink': 1, 
'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7617214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 01:03:42.200577 | orchestrator | 2026-03-29 01:03:42.200630 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-29 01:03:42.200638 | orchestrator | Sunday 29 March 2026 01:01:13 +0000 (0:00:25.360) 0:00:51.003 ********** 2026-03-29 01:03:42.200645 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-29 01:03:42.200651 | orchestrator | 2026-03-29 01:03:42.200658 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-29 01:03:42.200665 | orchestrator | Sunday 29 March 2026 01:01:14 +0000 (0:00:00.734) 0:00:51.738 ********** 2026-03-29 01:03:42.200675 | orchestrator | [WARNING]: Skipped 2026-03-29 01:03:42.200687 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-29 01:03:42.200694 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-29 01:03:42.200700 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-29 01:03:42.200738 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-29 01:03:42.200744 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-29 01:03:42.200750 | orchestrator | [WARNING]: Skipped 2026-03-29 01:03:42.200756 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-29 01:03:42.200763 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-29 01:03:42.200769 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-29 01:03:42.200775 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-29 
01:03:42.200781 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 01:03:42.200787 | orchestrator | [WARNING]: Skipped 2026-03-29 01:03:42.200794 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-29 01:03:42.200800 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-29 01:03:42.200806 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-29 01:03:42.200812 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-29 01:03:42.200818 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-29 01:03:42.200824 | orchestrator | [WARNING]: Skipped 2026-03-29 01:03:42.200831 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-29 01:03:42.200846 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-29 01:03:42.200855 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-29 01:03:42.200862 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-29 01:03:42.200869 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-29 01:03:42.200875 | orchestrator | [WARNING]: Skipped 2026-03-29 01:03:42.200880 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-29 01:03:42.200887 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-29 01:03:42.200893 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-29 01:03:42.200899 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-29 01:03:42.200905 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-29 01:03:42.200911 | orchestrator | [WARNING]: Skipped 2026-03-29 01:03:42.200916 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-29 01:03:42.200923 | orchestrator | 
node-3/prometheus.yml.d' path due to this access issue: 2026-03-29 01:03:42.200929 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-29 01:03:42.200934 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-29 01:03:42.200940 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-29 01:03:42.200947 | orchestrator | [WARNING]: Skipped 2026-03-29 01:03:42.200953 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-29 01:03:42.200958 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-29 01:03:42.200964 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-29 01:03:42.200971 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-29 01:03:42.200978 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-29 01:03:42.200983 | orchestrator | 2026-03-29 01:03:42.200987 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-29 01:03:42.200991 | orchestrator | Sunday 29 March 2026 01:01:17 +0000 (0:00:02.910) 0:00:54.649 ********** 2026-03-29 01:03:42.200995 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-29 01:03:42.201006 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:03:42.201012 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-29 01:03:42.201019 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:03:42.201026 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-29 01:03:42.201031 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:03:42.201038 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-29 01:03:42.201043 | orchestrator | 
skipping: [testbed-node-3]
2026-03-29 01:03:42.201047 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-29 01:03:42.201051 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:03:42.201055 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-29 01:03:42.201059 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:03:42.201062 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-29 01:03:42.201066 | orchestrator |
2026-03-29 01:03:42.201070 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-03-29 01:03:42.201074 | orchestrator | Sunday 29 March 2026 01:01:37 +0000 (0:00:19.844) 0:01:14.493 **********
2026-03-29 01:03:42.201078 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 01:03:42.201082 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:03:42.201135 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 01:03:42.201149 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:03:42.201153 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 01:03:42.201157 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:03:42.201161 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 01:03:42.201165 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:03:42.201168 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 01:03:42.201172 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:03:42.201176 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 01:03:42.201180 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:03:42.201184 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 01:03:42.201188 | orchestrator |
2026-03-29 01:03:42.201192 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-03-29 01:03:42.201196 | orchestrator | Sunday 29 March 2026 01:01:41 +0000 (0:00:03.859) 0:01:18.353 **********
2026-03-29 01:03:42.201200 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 01:03:42.201205 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:03:42.201209 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 01:03:42.201218 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 01:03:42.201222 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:03:42.201226 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:03:42.201230 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 01:03:42.201234 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:03:42.201238 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 01:03:42.201246 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 01:03:42.201250 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:03:42.201254 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 01:03:42.201258 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:03:42.201262 | orchestrator |
2026-03-29 01:03:42.201266 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-03-29 01:03:42.201270 | orchestrator | Sunday 29 March 2026 01:01:43 +0000 (0:00:01.785) 0:01:20.139 **********
2026-03-29 01:03:42.201274 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:03:42.201277 | orchestrator |
2026-03-29 01:03:42.201281 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-03-29 01:03:42.201285 | orchestrator | Sunday 29 March 2026 01:01:44 +0000 (0:00:01.004) 0:01:21.143 **********
2026-03-29 01:03:42.201289 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:03:42.201296 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:03:42.201301 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:03:42.201309 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:03:42.201318 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:03:42.201325 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:03:42.201330 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:03:42.201336 | orchestrator |
2026-03-29 01:03:42.201342 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-03-29 01:03:42.201348 | orchestrator | Sunday 29 March 2026 01:01:44 +0000 (0:00:00.926) 0:01:22.070 **********
2026-03-29 01:03:42.201354 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:03:42.201359 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:03:42.201366 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:03:42.201372 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:03:42.201378 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:03:42.201384 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:03:42.201390 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:03:42.201396 | orchestrator |
2026-03-29 01:03:42.201402 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-03-29 01:03:42.201410 | orchestrator | Sunday 29 March 2026 01:01:47 +0000 (0:00:02.467) 0:01:24.538 **********
2026-03-29 01:03:42.201416 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-29 01:03:42.201422 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-29 01:03:42.201428 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-29 01:03:42.201434 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:03:42.201438 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:03:42.201442 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:03:42.201445 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-29 01:03:42.201449 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:03:42.201453 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-29 01:03:42.201457 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-29 01:03:42.201465 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:03:42.201470 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:03:42.201473 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-29 01:03:42.201477 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:03:42.201481 | orchestrator |
2026-03-29 01:03:42.201485 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-03-29 01:03:42.201494 | orchestrator | Sunday 29 March 2026 01:01:49 +0000 (0:00:01.867) 0:01:26.405 **********
2026-03-29 01:03:42.201500 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-29 01:03:42.201507 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:03:42.201513 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-29 01:03:42.201520 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-29 01:03:42.201526 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-29 01:03:42.201533 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-29 01:03:42.201538 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:03:42.201545 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:03:42.201551 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:03:42.201557 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-29 01:03:42.201566 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:03:42.201572 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-29 01:03:42.201578 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:03:42.201584 | orchestrator |
2026-03-29 01:03:42.201589 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-03-29 01:03:42.201595 | orchestrator | Sunday 29 March 2026 01:01:51 +0000 (0:00:02.676) 0:01:29.082 **********
2026-03-29 01:03:42.201601 | orchestrator | [WARNING]: Skipped
2026-03-29 01:03:42.201607 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-03-29 01:03:42.201612 | orchestrator | due to this access issue:
2026-03-29 01:03:42.201618 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-03-29 01:03:42.201624 | orchestrator | not a directory
2026-03-29 01:03:42.201630 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:03:42.201635 | orchestrator |
2026-03-29 01:03:42.201642 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-03-29 01:03:42.201648 | orchestrator | Sunday 29 March 2026 01:01:53 +0000 (0:00:01.553) 0:01:30.635 **********
2026-03-29 01:03:42.201655 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:03:42.201661 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:03:42.201667 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:03:42.201673 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:03:42.201680 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:03:42.201686 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:03:42.201693 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:03:42.201699 | orchestrator |
2026-03-29 01:03:42.201705 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-03-29 01:03:42.201711 | orchestrator | Sunday 29 March 2026 01:01:54 +0000 (0:00:00.933) 0:01:31.569 **********
2026-03-29 01:03:42.201717 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:03:42.201723 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:03:42.201729 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:03:42.201735 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:03:42.201741 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:03:42.201748 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:03:42.201753 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:03:42.201759 | orchestrator |
2026-03-29 01:03:42.201765 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-03-29 01:03:42.201771 | orchestrator | Sunday 29 March 2026 01:01:55 +0000 (0:00:00.871) 0:01:32.441 **********
2026-03-29 01:03:42.201780 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-29 01:03:42.201802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 01:03:42.201810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 01:03:42.201816 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 01:03:42.201828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 01:03:42.201836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 01:03:42.201842 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 01:03:42.201847 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 01:03:42.201855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 01:03:42.201863 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 01:03:42.201868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 01:03:42.201876 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-29 01:03:42.201883 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 01:03:42.201887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 01:03:42.201899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 01:03:42.201903 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 01:03:42.201914 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 01:03:42.201921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 01:03:42.201926 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 01:03:42.201942 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-29 01:03:42.201949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 01:03:42.201955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 01:03:42.201967 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 01:03:42.201973 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-29 01:03:42.201987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 01:03:42.201995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 01:03:42.202001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 01:03:42.202066 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-29 01:03:42.202076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 01:03:42.202100 | orchestrator |
2026-03-29 01:03:42.202110 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-03-29 01:03:42.202119 | orchestrator | Sunday 29 March 2026 01:02:00 +0000 (0:00:05.207) 0:01:37.649 **********
2026-03-29 01:03:42.202125 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-29 01:03:42.202131 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:03:42.202137 | orchestrator |
2026-03-29 01:03:42.202144 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-29 01:03:42.202150 | orchestrator | Sunday 29 March 2026 01:02:02 +0000 (0:00:01.680) 0:01:39.329 **********
2026-03-29 01:03:42.202156 | orchestrator |
2026-03-29 01:03:42.202162 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-29 01:03:42.202169 | orchestrator | Sunday 29 March 2026 01:02:02 +0000 (0:00:00.120) 0:01:39.450 **********
2026-03-29 01:03:42.202176 | orchestrator |
2026-03-29 01:03:42.202180 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-29 01:03:42.202184 | orchestrator | Sunday 29 March 2026 01:02:02 +0000 (0:00:00.103) 0:01:39.554 **********
2026-03-29 01:03:42.202187 | orchestrator |
2026-03-29 01:03:42.202191 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-29 01:03:42.202195 | orchestrator | Sunday 29 March 2026 01:02:02 +0000 (0:00:00.094) 0:01:39.649 **********
2026-03-29 01:03:42.202199 | orchestrator |
2026-03-29 01:03:42.202203 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-29 01:03:42.202207 | orchestrator | Sunday 29 March 2026 01:02:02 +0000 (0:00:00.066) 0:01:39.715 **********
2026-03-29 01:03:42.202210 | orchestrator |
2026-03-29 01:03:42.202214 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-29 01:03:42.202219 | orchestrator | Sunday 29 March 2026 01:02:02 +0000 (0:00:00.068) 0:01:39.784 **********
2026-03-29 01:03:42.202225 | orchestrator |
2026-03-29 01:03:42.202231 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-29 01:03:42.202239 | orchestrator | Sunday 29 March 2026 01:02:02 +0000 (0:00:00.069) 0:01:39.853 **********
2026-03-29 01:03:42.202247 | orchestrator |
2026-03-29 01:03:42.202253 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-03-29 01:03:42.202259 | orchestrator | Sunday 29 March 2026 01:02:02 +0000 (0:00:00.086) 0:01:39.940 **********
2026-03-29 01:03:42.202266 | orchestrator | changed: [testbed-manager]
2026-03-29 01:03:42.202272 | orchestrator |
2026-03-29 01:03:42.202278 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-03-29 01:03:42.202284 | orchestrator | Sunday 29 March 2026 01:02:18 +0000 (0:00:15.429) 0:01:55.369 **********
2026-03-29 01:03:42.202289 | orchestrator | changed: [testbed-node-3]
2026-03-29 01:03:42.202296 | orchestrator | changed: [testbed-node-4]
2026-03-29 01:03:42.202301 | orchestrator | changed: [testbed-manager]
2026-03-29 01:03:42.202314 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:03:42.202320 | orchestrator | changed: [testbed-node-5]
2026-03-29 01:03:42.202326 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:03:42.202332 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:03:42.202339 | orchestrator |
2026-03-29 01:03:42.202346 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-03-29 01:03:42.202352 | orchestrator | Sunday 29 March 2026 01:02:30 +0000 (0:00:12.484) 0:02:07.854 **********
2026-03-29 01:03:42.202358 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:03:42.202364 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:03:42.202370 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:03:42.202376 | orchestrator |
2026-03-29 01:03:42.202386 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-03-29 01:03:42.202395 | orchestrator | Sunday 29 March 2026 01:02:40 +0000 (0:00:09.351) 0:02:17.205 **********
2026-03-29 01:03:42.202400 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:03:42.202407 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:03:42.202413 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:03:42.202427 | orchestrator |
2026-03-29 01:03:42.202433 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-03-29 01:03:42.202439 | orchestrator | Sunday 29 March 2026 01:02:50 +0000 (0:00:10.195) 0:02:27.401 **********
2026-03-29 01:03:42.202446 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:03:42.202452 | orchestrator | changed: [testbed-manager]
2026-03-29 01:03:42.202458 | orchestrator | changed: [testbed-node-4]
2026-03-29 01:03:42.202463 | orchestrator | changed: [testbed-node-5]
2026-03-29 01:03:42.202469 | orchestrator | changed: [testbed-node-3]
2026-03-29 01:03:42.202474 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:03:42.202480 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:03:42.202486 | orchestrator |
2026-03-29 01:03:42.202492 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-03-29 01:03:42.202504 | orchestrator | Sunday 29 March 2026 01:03:03 +0000 (0:00:13.660) 0:02:41.062 **********
2026-03-29 01:03:42.202510 | orchestrator | changed: [testbed-manager]
2026-03-29 01:03:42.202516 | orchestrator |
2026-03-29 01:03:42.202522 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-03-29 01:03:42.202527 | orchestrator | Sunday 29 March 2026 01:03:12 +0000 (0:00:08.095) 0:02:49.158 **********
2026-03-29 01:03:42.202531 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:03:42.202535 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:03:42.202539 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:03:42.202543 | orchestrator |
2026-03-29 01:03:42.202548 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-03-29 01:03:42.202555 | orchestrator | Sunday 29 March 2026 01:03:24 +0000 (0:00:12.642) 0:03:01.801 **********
2026-03-29 01:03:42.202561 | orchestrator | changed: [testbed-manager]
2026-03-29 01:03:42.202567 | orchestrator |
2026-03-29 01:03:42.202572 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-03-29 01:03:42.202578 | orchestrator | Sunday 29 March 2026 01:03:29 +0000 (0:00:04.744) 0:03:06.545 **********
2026-03-29 01:03:42.202583 | orchestrator | changed: [testbed-node-5]
2026-03-29 01:03:42.202588 | orchestrator | changed: [testbed-node-4]
2026-03-29 01:03:42.202594 | orchestrator | changed: [testbed-node-3]
2026-03-29 01:03:42.202600 | orchestrator |
2026-03-29 01:03:42.202606 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 01:03:42.202613 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-29 01:03:42.202620 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-29 01:03:42.202626 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-29 01:03:42.202630 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-29 01:03:42.202634 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-29 01:03:42.202638 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-29 01:03:42.202642 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-29 01:03:42.202645 | orchestrator |
2026-03-29 01:03:42.202649 | orchestrator |
2026-03-29 01:03:42.202653 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 01:03:42.202657 | orchestrator | Sunday 29 March 2026 01:03:38 +0000 (0:00:09.434) 0:03:15.979 **********
2026-03-29 01:03:42.202661 | orchestrator | ===============================================================================
2026-03-29 01:03:42.202669 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.36s
2026-03-29 01:03:42.202673 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 19.84s
2026-03-29 01:03:42.202677 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 15.43s
2026-03-29 01:03:42.202681 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.66s
2026-03-29 01:03:42.202685 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 12.64s
2026-03-29 01:03:42.202689 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.48s
2026-03-29 01:03:42.202697 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.20s
2026-03-29 01:03:42.202702 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 9.43s
2026-03-29 01:03:42.202706 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 9.35s
2026-03-29 01:03:42.202710 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.10s
2026-03-29 01:03:42.202713 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.34s
2026-03-29 01:03:42.202717 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.71s
2026-03-29 01:03:42.202721 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.21s
2026-03-29 01:03:42.202725 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.74s
2026-03-29 01:03:42.202729 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.32s
2026-03-29 01:03:42.202733 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.86s
2026-03-29 01:03:42.202736 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.91s
2026-03-29 01:03:42.202740 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.68s
2026-03-29 01:03:42.202744 | orchestrator |
prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.47s 2026-03-29 01:03:42.202749 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.06s 2026-03-29 01:03:42.202755 | orchestrator | 2026-03-29 01:03:42 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:03:42.202762 | orchestrator | 2026-03-29 01:03:42 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:03:45.243902 | orchestrator | 2026-03-29 01:03:45 | INFO  | Task e5e9fde8-f450-428e-b9af-bb5d6525f408 is in state STARTED 2026-03-29 01:03:45.249674 | orchestrator | 2026-03-29 01:03:45 | INFO  | Task b5f979b1-cb3d-4ba2-b8aa-c6e08ca35893 is in state SUCCESS 2026-03-29 01:03:45.249800 | orchestrator | 2026-03-29 01:03:45.251531 | orchestrator | 2026-03-29 01:03:45.251582 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:03:45.251589 | orchestrator | 2026-03-29 01:03:45.251593 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:03:45.251597 | orchestrator | Sunday 29 March 2026 01:00:42 +0000 (0:00:00.436) 0:00:00.436 ********** 2026-03-29 01:03:45.251601 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:03:45.251606 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:03:45.251610 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:03:45.251614 | orchestrator | 2026-03-29 01:03:45.251618 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:03:45.251621 | orchestrator | Sunday 29 March 2026 01:00:42 +0000 (0:00:00.341) 0:00:00.777 ********** 2026-03-29 01:03:45.251625 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-29 01:03:45.251630 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-29 01:03:45.251634 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-29 
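Between plays, the orchestrator polls task state (`STARTED`, then eventually `SUCCESS`) with a fixed one-second wait, as the `INFO` lines above show. A minimal Python sketch of that poll-until-terminal pattern; the helper name `wait_for_task` and its callable interface are illustrative assumptions, not the actual osism client API:

```python
import time


def wait_for_task(get_state, interval=1.0, timeout=60.0):
    """Poll a task-state callable until it reaches a terminal state."""
    terminal = {"SUCCESS", "FAILURE"}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state()
        if state in terminal:
            return state
        # Mirrors the "Wait 1 second(s) until the next check" message.
        time.sleep(interval)
    raise TimeoutError("task did not reach a terminal state")


# Simulated task that reports STARTED twice, then SUCCESS.
states = iter(["STARTED", "STARTED", "SUCCESS"])
print(wait_for_task(lambda: next(states), interval=0.01))  # SUCCESS
```

A fixed interval is fine for short deploy tasks like these; for long-running tasks a capped exponential backoff would reduce polling load.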
01:03:45.251637 | orchestrator |
2026-03-29 01:03:45.251641 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-03-29 01:03:45.251645 | orchestrator |
2026-03-29 01:03:45.251660 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-29 01:03:45.251664 | orchestrator | Sunday 29 March 2026 01:00:42 +0000 (0:00:00.435) 0:00:01.213 **********
2026-03-29 01:03:45.251668 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:03:45.251672 | orchestrator |
2026-03-29 01:03:45.251676 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-03-29 01:03:45.251680 | orchestrator | Sunday 29 March 2026 01:00:43 +0000 (0:00:01.169) 0:00:02.382 **********
2026-03-29 01:03:45.251684 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-03-29 01:03:45.251688 | orchestrator |
2026-03-29 01:03:45.251692 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-03-29 01:03:45.251696 | orchestrator | Sunday 29 March 2026 01:00:48 +0000 (0:00:04.097) 0:00:06.480 **********
2026-03-29 01:03:45.251700 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-03-29 01:03:45.251704 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-03-29 01:03:45.251708 | orchestrator |
2026-03-29 01:03:45.251712 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-03-29 01:03:45.251716 | orchestrator | Sunday 29 March 2026 01:00:55 +0000 (0:00:07.190) 0:00:13.670 **********
2026-03-29 01:03:45.251720 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-29 01:03:45.251724 | orchestrator |
2026-03-29
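The cinderv3 endpoint URLs registered above carry a literal `%(tenant_id)s` placeholder: Keystone stores the template and the project ID is substituted per request. A small sketch of that old-style `%`-interpolation; the project ID below is a made-up example, not one from this deployment:

```python
# Endpoint templates exactly as registered in the log above.
INTERNAL = "https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s"
PUBLIC = "https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s"


def expand(template: str, tenant_id: str) -> str:
    """Substitute the project (tenant) ID into an endpoint template."""
    return template % {"tenant_id": tenant_id}


print(expand(PUBLIC, "example-project-id"))
# https://api.testbed.osism.xyz:8776/v3/example-project-id
```

The internal endpoint points at `api-int.testbed.osism.xyz` so service-to-service traffic stays off the public VIP.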
01:03:45.251728 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-29 01:03:45.251732 | orchestrator | Sunday 29 March 2026 01:00:59 +0000 (0:00:03.876) 0:00:17.546 ********** 2026-03-29 01:03:45.251735 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-29 01:03:45.251739 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-29 01:03:45.251743 | orchestrator | 2026-03-29 01:03:45.251747 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-29 01:03:45.251751 | orchestrator | Sunday 29 March 2026 01:01:03 +0000 (0:00:04.488) 0:00:22.035 ********** 2026-03-29 01:03:45.251754 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-29 01:03:45.251758 | orchestrator | 2026-03-29 01:03:45.251762 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-29 01:03:45.251766 | orchestrator | Sunday 29 March 2026 01:01:07 +0000 (0:00:03.713) 0:00:25.749 ********** 2026-03-29 01:03:45.251770 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-29 01:03:45.251773 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-29 01:03:45.251777 | orchestrator | 2026-03-29 01:03:45.251781 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-29 01:03:45.251785 | orchestrator | Sunday 29 March 2026 01:01:15 +0000 (0:00:07.980) 0:00:33.730 ********** 2026-03-29 01:03:45.251790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:03:45.251811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:03:45.251819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:03:45.251823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:03:45.251828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:03:45.251832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:03:45.251838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:03:45.251847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:03:45.251852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:03:45.251856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:03:45.251859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:03:45.251863 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:03:45.251874 | orchestrator | 2026-03-29 01:03:45.251878 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-29 01:03:45.251890 | orchestrator | Sunday 29 March 2026 01:01:19 +0000 (0:00:04.167) 0:00:37.898 ********** 2026-03-29 01:03:45.251950 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:03:45.251956 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:03:45.251960 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:03:45.251963 | orchestrator | 2026-03-29 01:03:45.251967 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-29 01:03:45.251978 | orchestrator | Sunday 29 March 2026 01:01:19 +0000 (0:00:00.230) 0:00:38.128 ********** 2026-03-29 01:03:45.252249 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:03:45.252269 | orchestrator | 2026-03-29 01:03:45.252276 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-29 01:03:45.252284 | orchestrator | Sunday 29 March 2026 01:01:20 +0000 (0:00:00.610) 0:00:38.738 ********** 2026-03-29 01:03:45.252322 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-29 
01:03:45.252330 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-29 01:03:45.252336 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-29 01:03:45.252342 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-29 01:03:45.252347 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-29 01:03:45.252353 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-29 01:03:45.252358 | orchestrator | 2026-03-29 01:03:45.252364 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-29 01:03:45.252397 | orchestrator | Sunday 29 March 2026 01:01:22 +0000 (0:00:02.422) 0:00:41.161 ********** 2026-03-29 01:03:45.252403 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-29 01:03:45.252408 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-29 01:03:45.252413 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-29 01:03:45.252491 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-29 01:03:45.252511 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-29 01:03:45.252516 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-29 01:03:45.252521 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-29 01:03:45.252525 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-29 01:03:45.252533 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-29 01:03:45.252548 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-29 01:03:45.252553 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-29 01:03:45.252557 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-03-29 01:03:45.252561 | orchestrator |
2026-03-29 01:03:45.252565 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-03-29 01:03:45.252569 | orchestrator | Sunday 29 March 2026 01:01:26 +0000 (0:00:04.020) 0:00:45.181 **********
2026-03-29 01:03:45.252572 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-29 01:03:45.252577 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-29 01:03:45.252581 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-03-29 01:03:45.252584 | orchestrator |
2026-03-29 01:03:45.252588 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-03-29 01:03:45.252592 | orchestrator | Sunday 29 March 2026 01:01:28 +0000 (0:00:02.013) 0:00:47.195 **********
2026-03-29 01:03:45.252596 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring)
2026-03-29 01:03:45.252602 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring)
2026-03-29 01:03:45.252606 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring)
2026-03-29 01:03:45.252610 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring)
2026-03-29 01:03:45.252614 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring)
2026-03-29 01:03:45.252617 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring)
2026-03-29 01:03:45.252621 | orchestrator |
2026-03-29 01:03:45.252625 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-03-29 01:03:45.252629 | orchestrator | Sunday 29 March 2026 01:01:32 +0000 (0:00:01.020) 0:00:50.691 **********
2026-03-29 01:03:45.252632 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-03-29 01:03:45.252636 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-03-29 01:03:45.252640 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-03-29 01:03:45.252644 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-03-29 01:03:45.252647 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-03-29 01:03:45.252651 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-03-29 01:03:45.252655 | orchestrator |
2026-03-29 01:03:45.252659 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-03-29 01:03:45.252662 | orchestrator | Sunday 29 March 2026 01:01:33 +0000 (0:00:00.242) 0:00:51.711 **********
2026-03-29 01:03:45.252666 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:03:45.252670 | orchestrator |
2026-03-29 01:03:45.252674 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-03-29 01:03:45.252678 | orchestrator | Sunday 29 March 2026 01:01:33 +0000 (0:00:00.291) 0:00:51.954 **********
2026-03-29 01:03:45.252681 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:03:45.252685 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:03:45.252689 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:03:45.252693 | orchestrator |
2026-03-29 01:03:45.252696 | orchestrator | TASK [cinder : include_tasks] **************************************************
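The container definitions in this play attach healthchecks such as `healthcheck_port cinder-backup 5672` and `healthcheck_curl http://192.168.16.10:8776`. Assuming `healthcheck_port` reduces to a TCP connect probe (a guess at its semantics, not kolla's actual script), a rough Python equivalent looks like this:

```python
import socket


def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Probe host:port with a plain TCP connect; True iff it succeeds."""
    try:
        # create_connection handles name resolution and IPv4/IPv6 fallback.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `tcp_port_open("192.168.16.10", 5672)` would mirror the RabbitMQ-port check the cinder-volume and cinder-backup healthchecks perform, while the API containers use an HTTP probe instead because a bare connect would not catch a wedged WSGI process.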
2026-03-29 01:03:45.252708 | orchestrator | Sunday 29 March 2026 01:01:33 +0000 (0:00:00.291) 0:00:52.245 ********** 2026-03-29 01:03:45.252718 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:03:45.252722 | orchestrator | 2026-03-29 01:03:45.252726 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-29 01:03:45.252739 | orchestrator | Sunday 29 March 2026 01:01:34 +0000 (0:00:00.565) 0:00:52.811 ********** 2026-03-29 01:03:45.252743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:03:45.252749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:03:45.252755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:03:45.252760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:03:45.252767 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:03:45.252774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:03:45.252778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:03:45.252790 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:03:45.252801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:03:45.252805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:03:45.252811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:03:45.252818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:03:45.252823 | orchestrator | 2026-03-29 01:03:45.252826 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-29 01:03:45.252830 | orchestrator | Sunday 29 March 2026 01:01:39 +0000 (0:00:04.756) 0:00:57.568 ********** 2026-03-29 01:03:45.252834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 01:03:45.252841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:03:45.252845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 01:03:45.252849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 01:03:45.252853 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:03:45.252862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 01:03:45.252867 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:03:45.252875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 01:03:45.252882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}})  2026-03-29 01:03:45.252892 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:03:45.252900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 01:03:45.252909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:03:45.252920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 01:03:45.252926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 01:03:45.252938 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:03:45.252944 | orchestrator | 2026-03-29 01:03:45.252951 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-29 01:03:45.252957 | orchestrator | Sunday 29 March 2026 01:01:40 +0000 (0:00:01.383) 0:00:58.951 ********** 2026-03-29 01:03:45.252964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 01:03:45.252970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:03:45.252977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 01:03:45.252990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 01:03:45.252996 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:03:45.253004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 01:03:45.253015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:03:45.253022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 01:03:45.253029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 01:03:45.253036 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:03:45.253046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 01:03:45.253057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:03:45.253067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 01:03:45.253072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 01:03:45.253076 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:03:45.253079 | orchestrator | 2026-03-29 01:03:45.253104 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-29 01:03:45.253111 | orchestrator | Sunday 29 March 2026 01:01:41 +0000 (0:00:01.220) 0:01:00.171 ********** 2026-03-29 01:03:45.253117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:03:45.253127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:03:45.253137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:03:45.253148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared',
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253231 | orchestrator |
2026-03-29 01:03:45.253237 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-03-29 01:03:45.253244 | orchestrator | Sunday 29 March 2026 01:01:46 +0000 (0:00:04.716) 0:01:04.888 **********
2026-03-29 01:03:45.253250 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-29 01:03:45.253257 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-29 01:03:45.253264 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-29 01:03:45.253271 | orchestrator |
2026-03-29 01:03:45.253278 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-03-29 01:03:45.253284 | orchestrator | Sunday 29 March 2026 01:01:49 +0000 (0:00:02.522) 0:01:07.411 **********
2026-03-29 01:03:45.253297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-29 01:03:45.253309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-29 01:03:45.253317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-29 01:03:45.253323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume',
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253387 | orchestrator |
2026-03-29 01:03:45.253390 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-03-29 01:03:45.253394 | orchestrator | Sunday 29 March 2026 01:02:03 +0000 (0:00:14.345) 0:01:21.757 **********
2026-03-29 01:03:45.253400 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:03:45.253404 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:03:45.253408 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:03:45.253412 | orchestrator |
2026-03-29 01:03:45.253416 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] *********************
2026-03-29 01:03:45.253422 | orchestrator | Sunday 29 March 2026 01:02:04 +0000 (0:00:01.608) 0:01:23.366 **********
2026-03-29 01:03:45.253426 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:03:45.253430 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:03:45.253434 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:03:45.253438 | orchestrator |
2026-03-29 01:03:45.253442 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-03-29 01:03:45.253445 | orchestrator | Sunday 29 March 2026 01:02:06 +0000 (0:00:01.970) 0:01:25.336 **********
2026-03-29 01:03:45.253449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-29 01:03:45.253453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253470 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:03:45.253478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-29 01:03:45.253482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253494 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:03:45.253498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes':
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-29 01:03:45.253505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253522 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:03:45.253526 | orchestrator |
2026-03-29 01:03:45.253530 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2026-03-29 01:03:45.253533 | orchestrator | Sunday 29 March 2026 01:02:07 +0000 (0:00:00.939) 0:01:26.275 **********
2026-03-29 01:03:45.253537 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:03:45.253541 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:03:45.253545 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:03:45.253549 | orchestrator |
2026-03-29 01:03:45.253552 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2026-03-29 01:03:45.253556 | orchestrator | Sunday 29 March 2026 01:02:08 +0000 (0:00:00.652) 0:01:26.928 **********
2026-03-29 01:03:45.253560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-29 01:03:45.253570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-29 01:03:45.253580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-29 01:03:45.253596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-29 01:03:45.253675 | orchestrator |
2026-03-29 01:03:45.253683 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-29 01:03:45.253692 | orchestrator | Sunday 29 March 2026 01:02:12 +0000 (0:00:03.840) 0:01:30.768 **********
2026-03-29 01:03:45.253698 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:03:45.253704 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:03:45.253710 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:03:45.253716 | orchestrator |
2026-03-29 01:03:45.253722 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2026-03-29 01:03:45.253728 | orchestrator | Sunday 29 March 2026 01:02:12 +0000 (0:00:00.274) 0:01:31.042 **********
2026-03-29 01:03:45.253734 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:03:45.253740 | orchestrator |
2026-03-29 01:03:45.253747 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2026-03-29 01:03:45.253754 | orchestrator | Sunday 29 March 2026 01:02:15 +0000 (0:00:02.354) 0:01:33.396 **********
2026-03-29 01:03:45.253759 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:03:45.253765 | orchestrator |
2026-03-29 01:03:45.253771 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2026-03-29 01:03:45.253777 | orchestrator | Sunday 29 March 2026 01:02:17 +0000 (0:00:02.717) 0:01:36.114 **********
2026-03-29 01:03:45.253783 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:03:45.253789 | orchestrator |
2026-03-29 01:03:45.253795 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-03-29 01:03:45.253801 | orchestrator | Sunday 29 March 2026 01:02:38 +0000 (0:00:21.140) 0:01:57.255 **********
2026-03-29 01:03:45.253807 | orchestrator |
2026-03-29 01:03:45.253813 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-03-29 01:03:45.253819 | orchestrator | Sunday 29 March 2026 01:02:38 +0000 (0:00:00.061) 0:01:57.317 **********
2026-03-29 01:03:45.253825 | orchestrator |
2026-03-29 01:03:45.253831 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-03-29 01:03:45.253837 | orchestrator | Sunday 29 March 2026 01:02:38 +0000 (0:00:00.057) 0:01:57.375 **********
2026-03-29 01:03:45.253842 | orchestrator |
2026-03-29 01:03:45.253848 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2026-03-29 01:03:45.253854 | orchestrator | Sunday 29 March 2026 01:02:39 +0000 (0:00:00.059) 0:01:57.434 **********
2026-03-29 01:03:45.253860 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:03:45.253866 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:03:45.253871 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:03:45.253877 | orchestrator |
2026-03-29 01:03:45.253883 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2026-03-29 01:03:45.253889 | orchestrator | Sunday 29 March 2026 01:03:03 +0000 (0:00:24.870) 0:02:22.304 **********
2026-03-29 01:03:45.253895 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:03:45.253901 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:03:45.253906 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:03:45.253912 | orchestrator |
2026-03-29 01:03:45.253918 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2026-03-29 01:03:45.253924 | orchestrator | Sunday 29 March 2026 01:03:10 +0000 (0:00:06.581) 0:02:28.886 **********
2026-03-29 01:03:45.253934 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:03:45.253940 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:03:45.253946 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:03:45.253952 | orchestrator |
2026-03-29 01:03:45.253958 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-03-29 01:03:45.253967 | orchestrator | Sunday 29 March 2026 01:03:32 +0000 (0:00:22.382) 0:02:51.269 **********
2026-03-29 01:03:45.253973 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:03:45.253979 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:03:45.253985 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:03:45.253995 | orchestrator |
2026-03-29 01:03:45.254001 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-03-29 01:03:45.254007 | orchestrator | Sunday 29 March 2026 01:03:42 +0000 (0:00:10.120) 0:03:01.390 **********
2026-03-29 01:03:45.254044 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:03:45.254051 | orchestrator |
2026-03-29 01:03:45.254057 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 01:03:45.254063 | orchestrator | testbed-node-0 : ok=31  changed=23  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-29 01:03:45.254070 | orchestrator | testbed-node-1 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 01:03:45.254076 | orchestrator | testbed-node-2 : ok=22  
changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 01:03:45.254093 | orchestrator | 2026-03-29 01:03:45.254100 | orchestrator | 2026-03-29 01:03:45.254106 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:03:45.254113 | orchestrator | Sunday 29 March 2026 01:03:43 +0000 (0:00:00.214) 0:03:01.605 ********** 2026-03-29 01:03:45.254119 | orchestrator | =============================================================================== 2026-03-29 01:03:45.254125 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 24.87s 2026-03-29 01:03:45.254131 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 22.38s 2026-03-29 01:03:45.254137 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.14s 2026-03-29 01:03:45.254143 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 14.34s 2026-03-29 01:03:45.254149 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.12s 2026-03-29 01:03:45.254155 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.98s 2026-03-29 01:03:45.254160 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.19s 2026-03-29 01:03:45.254166 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.58s 2026-03-29 01:03:45.254172 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.76s 2026-03-29 01:03:45.254178 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.72s 2026-03-29 01:03:45.254184 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.49s 2026-03-29 01:03:45.254190 | orchestrator | cinder : Ensuring config directories exist 
------------------------------ 4.17s 2026-03-29 01:03:45.254195 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.10s 2026-03-29 01:03:45.254201 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.02s 2026-03-29 01:03:45.254208 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.88s 2026-03-29 01:03:45.254214 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.84s 2026-03-29 01:03:45.254220 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.71s 2026-03-29 01:03:45.254225 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.50s 2026-03-29 01:03:45.254231 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.72s 2026-03-29 01:03:45.254237 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.52s 2026-03-29 01:03:45.255485 | orchestrator | 2026-03-29 01:03:45 | INFO  | Task 6db50361-5eff-4136-ad47-e737479c1c22 is in state STARTED 2026-03-29 01:03:45.257432 | orchestrator | 2026-03-29 01:03:45 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED 2026-03-29 01:03:45.259378 | orchestrator | 2026-03-29 01:03:45 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:03:45.259515 | orchestrator | 2026-03-29 01:03:45 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:03:48.301558 | orchestrator | 2026-03-29 01:03:48 | INFO  | Task e5e9fde8-f450-428e-b9af-bb5d6525f408 is in state STARTED 2026-03-29 01:03:48.303418 | orchestrator | 2026-03-29 01:03:48 | INFO  | Task 6db50361-5eff-4136-ad47-e737479c1c22 is in state STARTED 2026-03-29 01:03:48.305065 | orchestrator | 2026-03-29 01:03:48 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED 2026-03-29 01:03:48.306627 | orchestrator 
| 2026-03-29 01:03:48 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED
2026-03-29 01:03:48.306660 | orchestrator | 2026-03-29 01:03:48 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:05:37.693841 | orchestrator | 2026-03-29 01:05:37 | INFO  | Task e5e9fde8-f450-428e-b9af-bb5d6525f408 is in state STARTED
2026-03-29 01:05:37.696532 | orchestrator | 2026-03-29 01:05:37 | INFO  | Task 6db50361-5eff-4136-ad47-e737479c1c22 is in state STARTED
2026-03-29 01:05:37.696947 | orchestrator | 2026-03-29 01:05:37 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED
2026-03-29 01:05:37.698335 | orchestrator | 2026-03-29 01:05:37 | INFO  | Task
05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:05:37.698373 | orchestrator | 2026-03-29 01:05:37 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:05:40.726488 | orchestrator | 2026-03-29 01:05:40 | INFO  | Task e5e9fde8-f450-428e-b9af-bb5d6525f408 is in state STARTED 2026-03-29 01:05:40.726765 | orchestrator | 2026-03-29 01:05:40 | INFO  | Task 6db50361-5eff-4136-ad47-e737479c1c22 is in state STARTED 2026-03-29 01:05:40.727909 | orchestrator | 2026-03-29 01:05:40 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED 2026-03-29 01:05:40.728579 | orchestrator | 2026-03-29 01:05:40 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:05:40.729722 | orchestrator | 2026-03-29 01:05:40 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:05:43.759139 | orchestrator | 2026-03-29 01:05:43 | INFO  | Task e5e9fde8-f450-428e-b9af-bb5d6525f408 is in state SUCCESS 2026-03-29 01:05:43.760281 | orchestrator | 2026-03-29 01:05:43.760317 | orchestrator | 2026-03-29 01:05:43.760324 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:05:43.760329 | orchestrator | 2026-03-29 01:05:43.760335 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:05:43.760340 | orchestrator | Sunday 29 March 2026 01:03:42 +0000 (0:00:00.305) 0:00:00.305 ********** 2026-03-29 01:05:43.760345 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:05:43.760351 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:05:43.760356 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:05:43.760362 | orchestrator | 2026-03-29 01:05:43.760367 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:05:43.760373 | orchestrator | Sunday 29 March 2026 01:03:42 +0000 (0:00:00.277) 0:00:00.582 ********** 2026-03-29 01:05:43.760378 | orchestrator | ok: 
[testbed-node-0] => (item=enable_barbican_True)
2026-03-29 01:05:43.760384 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-03-29 01:05:43.760404 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-03-29 01:05:43.760410 | orchestrator |
2026-03-29 01:05:43.760415 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-03-29 01:05:43.760420 | orchestrator |
2026-03-29 01:05:43.760425 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-29 01:05:43.760430 | orchestrator | Sunday 29 March 2026 01:03:43 +0000 (0:00:00.287) 0:00:00.870 **********
2026-03-29 01:05:43.760584 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:05:43.760597 | orchestrator |
2026-03-29 01:05:43.760602 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-03-29 01:05:43.760607 | orchestrator | Sunday 29 March 2026 01:03:43 +0000 (0:00:00.600) 0:00:01.471 **********
2026-03-29 01:05:43.760612 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-03-29 01:05:43.760617 | orchestrator |
2026-03-29 01:05:43.760622 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-03-29 01:05:43.760627 | orchestrator | Sunday 29 March 2026 01:03:47 +0000 (0:00:03.747) 0:00:05.219 **********
2026-03-29 01:05:43.760632 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-03-29 01:05:43.760637 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-03-29 01:05:43.760642 | orchestrator |
2026-03-29 01:05:43.760647 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-03-29 01:05:43.760653 | orchestrator | Sunday 29 March 2026 01:03:53 +0000 (0:00:06.324) 0:00:11.543 **********
2026-03-29 01:05:43.760658 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-29 01:05:43.760663 | orchestrator |
2026-03-29 01:05:43.760668 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-03-29 01:05:43.760673 | orchestrator | Sunday 29 March 2026 01:03:56 +0000 (0:00:03.166) 0:00:14.709 **********
2026-03-29 01:05:43.760678 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-03-29 01:05:43.760683 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-29 01:05:43.760688 | orchestrator |
2026-03-29 01:05:43.760693 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-03-29 01:05:43.760698 | orchestrator | Sunday 29 March 2026 01:04:00 +0000 (0:00:03.970) 0:00:18.680 **********
2026-03-29 01:05:43.760704 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-29 01:05:43.760709 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-03-29 01:05:43.760714 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-03-29 01:05:43.760720 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-03-29 01:05:43.760725 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-03-29 01:05:43.760731 | orchestrator |
2026-03-29 01:05:43.760736 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-03-29 01:05:43.760741 | orchestrator | Sunday 29 March 2026 01:04:18 +0000 (0:00:18.083) 0:00:36.763 **********
2026-03-29 01:05:43.760747 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-03-29 01:05:43.760760 | orchestrator |
2026-03-29 01:05:43.760765 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-03-29 01:05:43.760771 | orchestrator
| Sunday 29 March 2026 01:04:22 +0000 (0:00:03.937) 0:00:40.701 ********** 2026-03-29 01:05:43.760787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:05:43.760813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.760820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.760826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:05:43.760832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:05:43.760840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.760855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.760861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.760867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.760872 | orchestrator | 2026-03-29 01:05:43.760877 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-29 01:05:43.760882 | orchestrator | Sunday 29 March 2026 01:04:25 +0000 (0:00:03.070) 0:00:43.773 ********** 2026-03-29 01:05:43.760887 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-29 01:05:43.760892 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-29 01:05:43.760897 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-29 01:05:43.760902 | orchestrator | 2026-03-29 01:05:43.760908 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-29 01:05:43.760963 | orchestrator | Sunday 29 March 2026 01:04:28 +0000 (0:00:02.104) 0:00:45.877 ********** 2026-03-29 01:05:43.760969 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:05:43.760973 | orchestrator | 2026-03-29 01:05:43.760978 | orchestrator | TASK [barbican : Set barbican policy file] 
************************************* 2026-03-29 01:05:43.760983 | orchestrator | Sunday 29 March 2026 01:04:28 +0000 (0:00:00.108) 0:00:45.986 ********** 2026-03-29 01:05:43.760988 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:05:43.760992 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:05:43.760997 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:05:43.761002 | orchestrator | 2026-03-29 01:05:43.761007 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-29 01:05:43.761011 | orchestrator | Sunday 29 March 2026 01:04:28 +0000 (0:00:00.304) 0:00:46.290 ********** 2026-03-29 01:05:43.761016 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:05:43.761022 | orchestrator | 2026-03-29 01:05:43.761028 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-29 01:05:43.761033 | orchestrator | Sunday 29 March 2026 01:04:30 +0000 (0:00:02.156) 0:00:48.447 ********** 2026-03-29 01:05:43.761047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:05:43.761059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:05:43.761065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 
01:05:43.761071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761113 | orchestrator | 2026-03-29 01:05:43.761118 | orchestrator | TASK 
[service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-29 01:05:43.761122 | orchestrator | Sunday 29 March 2026 01:04:34 +0000 (0:00:03.645) 0:00:52.093 ********** 2026-03-29 01:05:43.761127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 01:05:43.761133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 01:05:43.761142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:05:43.761149 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:05:43.761158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 01:05:43.761163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 01:05:43.761169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:05:43.761173 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:05:43.761178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 01:05:43.761186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 01:05:43.761194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:05:43.761199 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:05:43.761204 | orchestrator | 2026-03-29 01:05:43.761209 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-29 01:05:43.761214 | orchestrator | Sunday 29 March 2026 01:04:34 +0000 (0:00:00.692) 0:00:52.786 ********** 2026-03-29 01:05:43.761223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 01:05:43.761228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 01:05:43.761233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:05:43.761242 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:05:43.761247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 01:05:43.761257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 01:05:43.761262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:05:43.761267 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:05:43.761276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 01:05:43.761281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 01:05:43.761289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:05:43.761295 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:05:43.761300 | orchestrator | 2026-03-29 01:05:43.761305 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-29 01:05:43.761310 | orchestrator | Sunday 29 March 2026 01:04:35 +0000 (0:00:00.789) 0:00:53.575 ********** 2026-03-29 01:05:43.761317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:05:43.761325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:05:43.761331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:05:43.761336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761377 | orchestrator | 2026-03-29 01:05:43.761382 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-29 01:05:43.761388 | orchestrator | Sunday 29 March 2026 01:04:40 +0000 (0:00:04.387) 0:00:57.963 ********** 2026-03-29 01:05:43.761393 | orchestrator | changed: 
[testbed-node-0] 2026-03-29 01:05:43.761398 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:05:43.761403 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:05:43.761408 | orchestrator | 2026-03-29 01:05:43.761413 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-29 01:05:43.761473 | orchestrator | Sunday 29 March 2026 01:04:43 +0000 (0:00:03.098) 0:01:01.061 ********** 2026-03-29 01:05:43.761478 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 01:05:43.761481 | orchestrator | 2026-03-29 01:05:43.761484 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-29 01:05:43.761487 | orchestrator | Sunday 29 March 2026 01:04:44 +0000 (0:00:00.914) 0:01:01.976 ********** 2026-03-29 01:05:43.761490 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:05:43.761493 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:05:43.761496 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:05:43.761499 | orchestrator | 2026-03-29 01:05:43.761502 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-29 01:05:43.761506 | orchestrator | Sunday 29 March 2026 01:04:45 +0000 (0:00:01.576) 0:01:03.552 ********** 2026-03-29 01:05:43.761509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:05:43.761515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:05:43.761521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:05:43.761524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761549 | orchestrator | 2026-03-29 01:05:43.761552 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-29 01:05:43.761555 | orchestrator | Sunday 29 March 2026 01:04:57 +0000 (0:00:12.084) 0:01:15.637 ********** 2026-03-29 01:05:43.761561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 01:05:43.761566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 01:05:43.761569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 01:05:43.761573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:05:43.761578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 01:05:43.761581 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:05:43.761587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:05:43.761593 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:05:43.761597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 01:05:43.761600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 01:05:43.761603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:05:43.761606 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:05:43.761610 | orchestrator | 2026-03-29 01:05:43.761613 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-29 01:05:43.761616 | orchestrator | Sunday 29 March 2026 01:04:59 +0000 (0:00:01.603) 0:01:17.240 ********** 2026-03-29 01:05:43.761621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:05:43.761627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:05:43.761632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:05:43.761635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:43.761656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 
01:05:43.761660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-29 01:05:43.761663 | orchestrator |
2026-03-29 01:05:43.761666 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-29 01:05:43.761669 | orchestrator | Sunday 29 March 2026 01:05:02 +0000 (0:00:03.197) 0:01:20.438 **********
2026-03-29 01:05:43.761672 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:05:43.761675 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:05:43.761679 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:05:43.761682 | orchestrator |
2026-03-29 01:05:43.761685 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-03-29 01:05:43.761688 | orchestrator | Sunday 29 March 2026 01:05:03 +0000 (0:00:02.076) 0:01:21.181 **********
2026-03-29 01:05:43.761691 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:05:43.761694 | orchestrator |
2026-03-29 01:05:43.761697 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-03-29 01:05:43.761700 | orchestrator | Sunday 29 March 2026 01:05:05 +0000 (0:00:02.404) 0:01:23.257 **********
2026-03-29 01:05:43.761703 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:05:43.761706 | orchestrator |
2026-03-29 01:05:43.761709 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-03-29 01:05:43.761713 | orchestrator | Sunday 29 March 2026 01:05:07 +0000 (0:00:02.404) 0:01:25.661 **********
2026-03-29 01:05:43.761716 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:05:43.761719 | orchestrator |
2026-03-29 01:05:43.761722 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-29 01:05:43.761725 | orchestrator | Sunday 29 March 2026 01:05:19 +0000 (0:00:11.840) 0:01:37.502 **********
2026-03-29 01:05:43.761728 | orchestrator |
2026-03-29 01:05:43.761731 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-29 01:05:43.761734 | orchestrator | Sunday 29 March 2026 01:05:20 +0000 (0:00:00.609) 0:01:38.112 **********
2026-03-29 01:05:43.761737 | orchestrator |
2026-03-29 01:05:43.761740 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-29 01:05:43.761743 | orchestrator | Sunday 29 March 2026 01:05:20 +0000 (0:00:00.156) 0:01:38.269 **********
2026-03-29 01:05:43.761746 | orchestrator |
2026-03-29 01:05:43.761749 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-03-29 01:05:43.761753 | orchestrator | Sunday 29 March 2026 01:05:20 +0000 (0:00:00.166) 0:01:38.435 **********
2026-03-29 01:05:43.761758 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:05:43.761763 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:05:43.761768 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:05:43.761773 | orchestrator |
2026-03-29 01:05:43.761778 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-03-29 01:05:43.761786 | orchestrator | Sunday 29 March 2026 01:05:31 +0000 (0:00:10.649) 0:01:49.085 **********
2026-03-29 01:05:43.761790 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:05:43.761795 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:05:43.761800 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:05:43.761805 | orchestrator |
2026-03-29 01:05:43.761810 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-03-29 01:05:43.761815 | orchestrator | Sunday 29 March 2026 01:05:36 +0000 (0:00:04.921) 0:01:54.006 **********
2026-03-29 01:05:43.761819 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:05:43.761824 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:05:43.761828 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:05:43.761833 | orchestrator |
2026-03-29 01:05:43.761840 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 01:05:43.761846 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 01:05:43.761852 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 01:05:43.761857 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 01:05:43.761862 | orchestrator |
2026-03-29 01:05:43.761867 | orchestrator |
2026-03-29 01:05:43.761872 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 01:05:43.761876 | orchestrator | Sunday 29 March 2026 01:05:41 +0000 (0:00:05.605) 0:01:59.612 **********
2026-03-29 01:05:43.761880 | orchestrator | ===============================================================================
2026-03-29 01:05:43.761885 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 18.08s
2026-03-29 01:05:43.761893 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 12.08s
2026-03-29 01:05:43.761899 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.84s
2026-03-29 01:05:43.761904 | orchestrator | barbican : Restart barbican-api container ------------------------------ 10.65s
2026-03-29 01:05:43.761909 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.32s
2026-03-29 01:05:43.761931 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.60s
2026-03-29 01:05:43.761936 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 4.92s
2026-03-29 01:05:43.761941 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.39s
2026-03-29 01:05:43.761946 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.97s
2026-03-29 01:05:43.761951 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.94s
2026-03-29 01:05:43.761956 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.75s
2026-03-29 01:05:43.761960 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.65s
2026-03-29 01:05:43.761965 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.20s
2026-03-29 01:05:43.761970 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.17s
2026-03-29 01:05:43.761976 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.10s
2026-03-29 01:05:43.761981 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.07s
2026-03-29 01:05:43.761986 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.40s
2026-03-29 01:05:43.761991 | orchestrator | barbican : include_tasks ------------------------------------------------ 2.16s
2026-03-29 01:05:43.761995 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.10s
2026-03-29 01:05:43.762000 | orchestrator | barbican : Creating barbican database
----------------------------------- 2.08s 2026-03-29 01:05:43.762009 | orchestrator | 2026-03-29 01:05:43 | INFO  | Task 851286e3-b1b5-4e76-8d99-2e44988fd96d is in state STARTED 2026-03-29 01:05:43.762050 | orchestrator | 2026-03-29 01:05:43 | INFO  | Task 6db50361-5eff-4136-ad47-e737479c1c22 is in state STARTED 2026-03-29 01:05:43.763040 | orchestrator | 2026-03-29 01:05:43 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED 2026-03-29 01:05:43.763699 | orchestrator | 2026-03-29 01:05:43 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:05:43.763835 | orchestrator | 2026-03-29 01:05:43 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:06:38.669635 | orchestrator | 2026-03-29 01:06:38 | INFO  | Task
851286e3-b1b5-4e76-8d99-2e44988fd96d is in state STARTED 2026-03-29 01:06:38.675757 | orchestrator | 2026-03-29 01:06:38 | INFO  | Task 6db50361-5eff-4136-ad47-e737479c1c22 is in state SUCCESS 2026-03-29 01:06:38.676759 | orchestrator | 2026-03-29 01:06:38.676798 | orchestrator | 2026-03-29 01:06:38.676806 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:06:38.676814 | orchestrator | 2026-03-29 01:06:38.676821 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:06:38.676827 | orchestrator | Sunday 29 March 2026 01:03:46 +0000 (0:00:00.287) 0:00:00.287 ********** 2026-03-29 01:06:38.676844 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:06:38.676851 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:06:38.676855 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:06:38.676861 | orchestrator | 2026-03-29 01:06:38.676866 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:06:38.676872 | orchestrator | Sunday 29 March 2026 01:03:46 +0000 (0:00:00.258) 0:00:00.545 ********** 2026-03-29 01:06:38.676877 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-29 01:06:38.676882 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-29 01:06:38.676887 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-29 01:06:38.676891 | orchestrator | 2026-03-29 01:06:38.676896 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-29 01:06:38.676901 | orchestrator | 2026-03-29 01:06:38.676906 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-29 01:06:38.676911 | orchestrator | Sunday 29 March 2026 01:03:46 +0000 (0:00:00.252) 0:00:00.798 ********** 2026-03-29 01:06:38.676916 | orchestrator | included: 
/ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:06:38.676923 | orchestrator | 2026-03-29 01:06:38.676928 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-03-29 01:06:38.676933 | orchestrator | Sunday 29 March 2026 01:03:47 +0000 (0:00:00.584) 0:00:01.383 ********** 2026-03-29 01:06:38.676939 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-29 01:06:38.677034 | orchestrator | 2026-03-29 01:06:38.677042 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-29 01:06:38.677047 | orchestrator | Sunday 29 March 2026 01:03:51 +0000 (0:00:03.627) 0:00:05.010 ********** 2026-03-29 01:06:38.677052 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-29 01:06:38.677057 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-29 01:06:38.677062 | orchestrator | 2026-03-29 01:06:38.677068 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-29 01:06:38.677074 | orchestrator | Sunday 29 March 2026 01:03:57 +0000 (0:00:06.342) 0:00:11.352 ********** 2026-03-29 01:06:38.677079 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-29 01:06:38.677084 | orchestrator | 2026-03-29 01:06:38.677089 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-29 01:06:38.677095 | orchestrator | Sunday 29 March 2026 01:04:00 +0000 (0:00:03.215) 0:00:14.568 ********** 2026-03-29 01:06:38.677117 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-29 01:06:38.677123 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-29 01:06:38.677128 | orchestrator | 2026-03-29 01:06:38.677134 | orchestrator | TASK 
[service-ks-register : designate | Creating roles] ************************ 2026-03-29 01:06:38.677139 | orchestrator | Sunday 29 March 2026 01:04:05 +0000 (0:00:04.868) 0:00:19.437 ********** 2026-03-29 01:06:38.677144 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-29 01:06:38.677149 | orchestrator | 2026-03-29 01:06:38.677154 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-29 01:06:38.677159 | orchestrator | Sunday 29 March 2026 01:04:09 +0000 (0:00:03.437) 0:00:22.874 ********** 2026-03-29 01:06:38.677165 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-29 01:06:38.677170 | orchestrator | 2026-03-29 01:06:38.677175 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-29 01:06:38.677181 | orchestrator | Sunday 29 March 2026 01:04:12 +0000 (0:00:03.662) 0:00:26.537 ********** 2026-03-29 01:06:38.677196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:06:38.677216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:06:38.677222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:06:38.677227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.677439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.677447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.677457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.677470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.677476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.677482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.677493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.677499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.677557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.677582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678211 | orchestrator | 2026-03-29 01:06:38.678222 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-29 01:06:38.678228 | orchestrator | Sunday 29 March 2026 01:04:17 +0000 (0:00:04.462) 0:00:31.000 ********** 2026-03-29 01:06:38.678233 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:06:38.678238 | orchestrator | 2026-03-29 01:06:38.678242 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-29 01:06:38.678247 | orchestrator | Sunday 29 March 2026 01:04:17 +0000 (0:00:00.117) 0:00:31.117 ********** 2026-03-29 01:06:38.678251 | orchestrator | skipping: 
[testbed-node-0] 2026-03-29 01:06:38.678256 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:06:38.678260 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:06:38.678265 | orchestrator | 2026-03-29 01:06:38.678271 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-29 01:06:38.678276 | orchestrator | Sunday 29 March 2026 01:04:17 +0000 (0:00:00.313) 0:00:31.431 ********** 2026-03-29 01:06:38.678281 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:06:38.678286 | orchestrator | 2026-03-29 01:06:38.678290 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-29 01:06:38.678295 | orchestrator | Sunday 29 March 2026 01:04:18 +0000 (0:00:00.972) 0:00:32.403 ********** 2026-03-29 01:06:38.678306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:06:38.678334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:06:38.678347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:06:38.678353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2026-03-29 01:06:38.678449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678459 | orchestrator | 2026-03-29 01:06:38.678464 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-29 01:06:38.678469 | orchestrator | Sunday 29 March 2026 01:04:25 +0000 (0:00:06.479) 0:00:38.882 ********** 2026-03-29 01:06:38.678475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:06:38.678482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 01:06:38.678495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678511 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:06:38.678514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 
'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:06:38.678520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 01:06:38.678529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678533 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678550 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:06:38.678555 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:06:38.678563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 01:06:38.678578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-03-29 01:06:38.678600 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:06:38.678607 | orchestrator | 2026-03-29 01:06:38.678612 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-29 01:06:38.678615 | orchestrator | Sunday 29 March 2026 01:04:25 +0000 (0:00:00.939) 0:00:39.822 ********** 2026-03-29 01:06:38.678619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:06:38.678624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 01:06:38.678636 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678649 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:06:38.678652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:06:38.678658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 01:06:38.678666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678680 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:06:38.678683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:06:38.678687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 01:06:38.678696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.678712 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:06:38.678715 | orchestrator | 2026-03-29 01:06:38.678718 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-29 01:06:38.678727 | orchestrator | Sunday 29 March 2026 01:04:28 +0000 (0:00:02.199) 0:00:42.021 ********** 2026-03-29 01:06:38.678730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:06:38.678739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:06:38.678745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:06:38.678752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678759 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678822 | orchestrator | 2026-03-29 01:06:38.678825 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-29 01:06:38.678828 | orchestrator | Sunday 29 March 2026 01:04:35 +0000 (0:00:07.483) 0:00:49.505 ********** 2026-03-29 01:06:38.678844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:06:38.678851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:06:38.678856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:06:38.678862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-03-29 01:06:38.678868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678882 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.678940 | orchestrator | 2026-03-29 01:06:38.678943 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-29 01:06:38.678947 | orchestrator | Sunday 29 March 2026 01:05:01 +0000 (0:00:25.417) 0:01:14.922 ********** 2026-03-29 01:06:38.678950 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-29 01:06:38.678953 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-29 01:06:38.678956 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-29 01:06:38.678960 | orchestrator | 2026-03-29 01:06:38.678963 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-29 01:06:38.678966 | orchestrator | Sunday 29 March 2026 01:05:07 +0000 (0:00:06.036) 0:01:20.958 ********** 2026-03-29 01:06:38.678972 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-29 01:06:38.678975 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-29 01:06:38.678978 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-29 01:06:38.678982 | orchestrator | 2026-03-29 01:06:38.678985 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-29 01:06:38.678988 | orchestrator | Sunday 29 March 2026 01:05:10 +0000 (0:00:03.280) 0:01:24.239 ********** 2026-03-29 01:06:38.678991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:06:38.678997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:06:38.679003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:06:38.679006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679015 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679170 | orchestrator | 2026-03-29 01:06:38.679175 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-29 01:06:38.679180 | orchestrator | Sunday 29 March 2026 01:05:13 +0000 (0:00:03.470) 0:01:27.709 ********** 2026-03-29 01:06:38.679187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:06:38.679192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:06:38.679200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:06:38.679208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679217 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679291 | orchestrator | 2026-03-29 01:06:38.679294 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-29 01:06:38.679297 | orchestrator | Sunday 29 March 2026 01:05:17 +0000 (0:00:04.093) 0:01:31.803 ********** 2026-03-29 01:06:38.679300 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:06:38.679304 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:06:38.679307 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:06:38.679310 | orchestrator | 2026-03-29 01:06:38.679313 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-29 01:06:38.679316 | orchestrator | Sunday 29 March 2026 01:05:18 +0000 (0:00:00.273) 0:01:32.077 ********** 2026-03-29 01:06:38.679320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:06:38.679323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 01:06:38.679326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679349 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:06:38.679352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:06:38.679356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 01:06:38.679359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679379 | orchestrator | skipping: 
[testbed-node-1] 2026-03-29 01:06:38.679382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:06:38.679386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 01:06:38.679389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:06:38.679410 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:06:38.679413 | orchestrator | 2026-03-29 01:06:38.679416 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-29 01:06:38.679420 | orchestrator | Sunday 29 March 2026 01:05:19 +0000 (0:00:01.196) 0:01:33.273 ********** 2026-03-29 01:06:38.679423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:06:38.679427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:06:38.679431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:06:38.679444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:38.679552 | orchestrator | 2026-03-29 01:06:38.679556 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-29 01:06:38.679559 | orchestrator | Sunday 29 March 2026 01:05:24 +0000 (0:00:05.180) 0:01:38.454 ********** 2026-03-29 01:06:38.679562 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:06:38.679565 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:06:38.679568 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:06:38.679572 | orchestrator | 2026-03-29 01:06:38.679575 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-29 01:06:38.679578 | orchestrator | Sunday 29 March 2026 01:05:24 +0000 (0:00:00.382) 0:01:38.836 ********** 2026-03-29 01:06:38.679582 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-29 01:06:38.679585 | orchestrator | 2026-03-29 01:06:38.679588 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-29 01:06:38.679591 | orchestrator | Sunday 29 March 2026 01:05:27 +0000 (0:00:02.185) 0:01:41.022 ********** 2026-03-29 01:06:38.679594 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-03-29 01:06:38.679597 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-29 01:06:38.679601 | orchestrator | 2026-03-29 01:06:38.679604 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-29 01:06:38.679607 | orchestrator | Sunday 29 March 2026 01:05:29 +0000 (0:00:02.408) 0:01:43.430 ********** 2026-03-29 01:06:38.679613 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:06:38.679619 | orchestrator | 2026-03-29 01:06:38.679626 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-29 01:06:38.679631 | orchestrator | Sunday 29 March 2026 01:05:45 +0000 (0:00:15.915) 0:01:59.346 ********** 2026-03-29 01:06:38.679636 | orchestrator | 2026-03-29 01:06:38.679641 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-29 01:06:38.679646 | orchestrator | Sunday 29 March 2026 01:05:45 +0000 (0:00:00.096) 0:01:59.442 ********** 2026-03-29 01:06:38.679651 | orchestrator | 2026-03-29 01:06:38.679655 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-29 01:06:38.679660 | orchestrator | Sunday 29 March 2026 01:05:45 +0000 (0:00:00.073) 0:01:59.516 ********** 2026-03-29 01:06:38.679665 | orchestrator | 2026-03-29 01:06:38.679669 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-29 01:06:38.679675 | orchestrator | Sunday 29 March 2026 01:05:45 +0000 (0:00:00.069) 0:01:59.585 ********** 2026-03-29 01:06:38.679684 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:06:38.679689 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:06:38.679695 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:06:38.679700 | orchestrator | 2026-03-29 01:06:38.679705 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-29 
01:06:38.679710 | orchestrator | Sunday 29 March 2026 01:05:55 +0000 (0:00:09.450) 0:02:09.036 ********** 2026-03-29 01:06:38.679716 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:06:38.679721 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:06:38.679724 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:06:38.679728 | orchestrator | 2026-03-29 01:06:38.679731 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-29 01:06:38.679734 | orchestrator | Sunday 29 March 2026 01:06:02 +0000 (0:00:07.344) 0:02:16.380 ********** 2026-03-29 01:06:38.679737 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:06:38.679740 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:06:38.679743 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:06:38.679746 | orchestrator | 2026-03-29 01:06:38.679749 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-29 01:06:38.679752 | orchestrator | Sunday 29 March 2026 01:06:12 +0000 (0:00:09.693) 0:02:26.074 ********** 2026-03-29 01:06:38.679755 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:06:38.679758 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:06:38.679761 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:06:38.679764 | orchestrator | 2026-03-29 01:06:38.679768 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-29 01:06:38.679771 | orchestrator | Sunday 29 March 2026 01:06:17 +0000 (0:00:05.520) 0:02:31.595 ********** 2026-03-29 01:06:38.679774 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:06:38.679777 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:06:38.679781 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:06:38.679784 | orchestrator | 2026-03-29 01:06:38.679787 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-29 01:06:38.679790 
| orchestrator | Sunday 29 March 2026 01:06:24 +0000 (0:00:06.360) 0:02:37.956 ********** 2026-03-29 01:06:38.679794 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:06:38.679797 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:06:38.679800 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:06:38.679803 | orchestrator | 2026-03-29 01:06:38.679806 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-29 01:06:38.679809 | orchestrator | Sunday 29 March 2026 01:06:31 +0000 (0:00:07.012) 0:02:44.968 ********** 2026-03-29 01:06:38.679815 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:06:38.679818 | orchestrator | 2026-03-29 01:06:38.679822 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:06:38.679825 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 01:06:38.679828 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 01:06:38.679859 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 01:06:38.679863 | orchestrator | 2026-03-29 01:06:38.679867 | orchestrator | 2026-03-29 01:06:38.679875 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:06:38.679878 | orchestrator | Sunday 29 March 2026 01:06:38 +0000 (0:00:07.093) 0:02:52.061 ********** 2026-03-29 01:06:38.679881 | orchestrator | =============================================================================== 2026-03-29 01:06:38.679885 | orchestrator | designate : Copying over designate.conf -------------------------------- 25.42s 2026-03-29 01:06:38.679888 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.92s 2026-03-29 01:06:38.679895 | orchestrator | designate : Restart 
designate-central container ------------------------- 9.69s 2026-03-29 01:06:38.679898 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 9.45s 2026-03-29 01:06:38.679901 | orchestrator | designate : Copying over config.json files for services ----------------- 7.48s 2026-03-29 01:06:38.679904 | orchestrator | designate : Restart designate-api container ----------------------------- 7.34s 2026-03-29 01:06:38.679907 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.09s 2026-03-29 01:06:38.679911 | orchestrator | designate : Restart designate-worker container -------------------------- 7.01s 2026-03-29 01:06:38.679914 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.48s 2026-03-29 01:06:38.679917 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.36s 2026-03-29 01:06:38.679920 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.34s 2026-03-29 01:06:38.679924 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.04s 2026-03-29 01:06:38.679927 | orchestrator | designate : Restart designate-producer container ------------------------ 5.52s 2026-03-29 01:06:38.679930 | orchestrator | designate : Check designate containers ---------------------------------- 5.18s 2026-03-29 01:06:38.679933 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.87s 2026-03-29 01:06:38.679938 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.46s 2026-03-29 01:06:38.679943 | orchestrator | designate : Copying over rndc.key --------------------------------------- 4.09s 2026-03-29 01:06:38.679948 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.66s 2026-03-29 01:06:38.679957 | orchestrator | service-ks-register : designate | 
Creating services --------------------- 3.63s 2026-03-29 01:06:38.679962 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.47s 2026-03-29 01:06:38.679967 | orchestrator | 2026-03-29 01:06:38 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED 2026-03-29 01:06:38.680593 | orchestrator | 2026-03-29 01:06:38 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:06:38.680630 | orchestrator | 2026-03-29 01:06:38 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:06:41.722802 | orchestrator | 2026-03-29 01:06:41 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED 2026-03-29 01:06:41.723465 | orchestrator | 2026-03-29 01:06:41 | INFO  | Task 851286e3-b1b5-4e76-8d99-2e44988fd96d is in state STARTED 2026-03-29 01:06:41.724688 | orchestrator | 2026-03-29 01:06:41 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED 2026-03-29 01:06:41.725578 | orchestrator | 2026-03-29 01:06:41 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:06:41.725606 | orchestrator | 2026-03-29 01:06:41 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:06:44.755735 | orchestrator | 2026-03-29 01:06:44 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED 2026-03-29 01:06:44.755953 | orchestrator | 2026-03-29 01:06:44 | INFO  | Task 851286e3-b1b5-4e76-8d99-2e44988fd96d is in state STARTED 2026-03-29 01:06:44.756636 | orchestrator | 2026-03-29 01:06:44 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED 2026-03-29 01:06:44.759391 | orchestrator | 2026-03-29 01:06:44 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:06:44.759444 | orchestrator | 2026-03-29 01:06:44 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:06:47.791357 | orchestrator | 2026-03-29 01:06:47 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state 
STARTED 2026-03-29 01:06:47.792377 | orchestrator | 2026-03-29 01:06:47 | INFO  | Task 851286e3-b1b5-4e76-8d99-2e44988fd96d is in state STARTED 2026-03-29 01:06:47.794640 | orchestrator | 2026-03-29 01:06:47 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED 2026-03-29 01:06:47.795668 | orchestrator | 2026-03-29 01:06:47 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:06:47.795711 | orchestrator | 2026-03-29 01:06:47 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:06:50.827912 | orchestrator | 2026-03-29 01:06:50 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED 2026-03-29 01:06:50.828474 | orchestrator | 2026-03-29 01:06:50 | INFO  | Task 851286e3-b1b5-4e76-8d99-2e44988fd96d is in state STARTED 2026-03-29 01:06:50.829675 | orchestrator | 2026-03-29 01:06:50 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED 2026-03-29 01:06:50.830891 | orchestrator | 2026-03-29 01:06:50 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:06:50.830955 | orchestrator | 2026-03-29 01:06:50 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:06:53.860549 | orchestrator | 2026-03-29 01:06:53 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED 2026-03-29 01:06:53.861918 | orchestrator | 2026-03-29 01:06:53 | INFO  | Task 851286e3-b1b5-4e76-8d99-2e44988fd96d is in state STARTED 2026-03-29 01:06:53.863519 | orchestrator | 2026-03-29 01:06:53 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED 2026-03-29 01:06:53.864106 | orchestrator | 2026-03-29 01:06:53 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:06:53.864442 | orchestrator | 2026-03-29 01:06:53 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:06:56.892180 | orchestrator | 2026-03-29 01:06:56 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED 2026-03-29 
01:06:56.893279 | orchestrator | 2026-03-29 01:06:56 | INFO  | Task 851286e3-b1b5-4e76-8d99-2e44988fd96d is in state STARTED 2026-03-29 01:06:56.895557 | orchestrator | 2026-03-29 01:06:56 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED 2026-03-29 01:06:56.896632 | orchestrator | 2026-03-29 01:06:56 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:06:56.896687 | orchestrator | 2026-03-29 01:06:56 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:06:59.935517 | orchestrator | 2026-03-29 01:06:59 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED 2026-03-29 01:06:59.937180 | orchestrator | 2026-03-29 01:06:59 | INFO  | Task 851286e3-b1b5-4e76-8d99-2e44988fd96d is in state STARTED 2026-03-29 01:06:59.939144 | orchestrator | 2026-03-29 01:06:59 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED 2026-03-29 01:06:59.940873 | orchestrator | 2026-03-29 01:06:59 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:06:59.940919 | orchestrator | 2026-03-29 01:06:59 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:02.975759 | orchestrator | 2026-03-29 01:07:02 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED 2026-03-29 01:07:02.976849 | orchestrator | 2026-03-29 01:07:02 | INFO  | Task 851286e3-b1b5-4e76-8d99-2e44988fd96d is in state STARTED 2026-03-29 01:07:02.977927 | orchestrator | 2026-03-29 01:07:02 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED 2026-03-29 01:07:02.978716 | orchestrator | 2026-03-29 01:07:02 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:07:02.978744 | orchestrator | 2026-03-29 01:07:02 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:06.017782 | orchestrator | 2026-03-29 01:07:06 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED 2026-03-29 01:07:06.019618 | orchestrator 
| 2026-03-29 01:07:06 | INFO  | Task 851286e3-b1b5-4e76-8d99-2e44988fd96d is in state STARTED
2026-03-29 01:07:06.023336 | orchestrator | 2026-03-29 01:07:06 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED
2026-03-29 01:07:06.025448 | orchestrator | 2026-03-29 01:07:06 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED
2026-03-29 01:07:06.025502 | orchestrator | 2026-03-29 01:07:06 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:07:09.061368 | orchestrator | 2026-03-29 01:07:09 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED
2026-03-29 01:07:09.061572 | orchestrator | 2026-03-29 01:07:09 | INFO  | Task 851286e3-b1b5-4e76-8d99-2e44988fd96d is in state STARTED
2026-03-29 01:07:09.062416 | orchestrator | 2026-03-29 01:07:09 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED
2026-03-29 01:07:09.063162 | orchestrator | 2026-03-29 01:07:09 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED
2026-03-29 01:07:09.063235 | orchestrator | 2026-03-29 01:07:09 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:07:12.088393 | orchestrator | 2026-03-29 01:07:12 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED
2026-03-29 01:07:12.088621 | orchestrator | 2026-03-29 01:07:12 | INFO  | Task 851286e3-b1b5-4e76-8d99-2e44988fd96d is in state STARTED
2026-03-29 01:07:12.089455 | orchestrator | 2026-03-29 01:07:12 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED
2026-03-29 01:07:12.090338 | orchestrator | 2026-03-29 01:07:12 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED
2026-03-29 01:07:12.090384 | orchestrator | 2026-03-29 01:07:12 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:07:15.115354 | orchestrator | 2026-03-29 01:07:15 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED
2026-03-29 01:07:15.115890 | orchestrator | 2026-03-29 01:07:15 | INFO  | Task 851286e3-b1b5-4e76-8d99-2e44988fd96d is in state STARTED
2026-03-29 01:07:15.116753 | orchestrator | 2026-03-29 01:07:15 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED
2026-03-29 01:07:15.117438 | orchestrator | 2026-03-29 01:07:15 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED
2026-03-29 01:07:15.117662 | orchestrator | 2026-03-29 01:07:15 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:07:18.143243 | orchestrator | 2026-03-29 01:07:18 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED
2026-03-29 01:07:18.143307 | orchestrator | 2026-03-29 01:07:18 | INFO  | Task 851286e3-b1b5-4e76-8d99-2e44988fd96d is in state SUCCESS
2026-03-29 01:07:18.143853 | orchestrator | 2026-03-29 01:07:18 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED
2026-03-29 01:07:18.144529 | orchestrator | 2026-03-29 01:07:18 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED
2026-03-29 01:07:18.144609 | orchestrator | 2026-03-29 01:07:18 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:07:21.175550 | orchestrator | 2026-03-29 01:07:21 | INFO  | Task e777afcd-bf7b-43de-ad1d-8dba3c410dba is in state STARTED
2026-03-29 01:07:21.176227 | orchestrator | 2026-03-29 01:07:21 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED
2026-03-29 01:07:21.178090 | orchestrator | 2026-03-29 01:07:21 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED
2026-03-29 01:07:21.178454 | orchestrator | 2026-03-29 01:07:21 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED
2026-03-29 01:07:21.178569 | orchestrator | 2026-03-29 01:07:21 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:07:24.275965 | orchestrator | 2026-03-29 01:07:24 | INFO  | Task e777afcd-bf7b-43de-ad1d-8dba3c410dba is in state STARTED
2026-03-29 01:07:24.277762 | orchestrator | 2026-03-29 01:07:24 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED
2026-03-29 01:07:24.278610 | orchestrator | 2026-03-29 01:07:24 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED
2026-03-29 01:07:24.280320 | orchestrator | 2026-03-29 01:07:24 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED
2026-03-29 01:07:24.280467 | orchestrator | 2026-03-29 01:07:24 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:07:27.312610 | orchestrator | 2026-03-29 01:07:27 | INFO  | Task e777afcd-bf7b-43de-ad1d-8dba3c410dba is in state STARTED
2026-03-29 01:07:27.313953 | orchestrator | 2026-03-29 01:07:27 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED
2026-03-29 01:07:27.315745 | orchestrator | 2026-03-29 01:07:27 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED
2026-03-29 01:07:27.318207 | orchestrator | 2026-03-29 01:07:27 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED
2026-03-29 01:07:27.318258 | orchestrator | 2026-03-29 01:07:27 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:07:30.358913 | orchestrator | 2026-03-29 01:07:30 | INFO  | Task e777afcd-bf7b-43de-ad1d-8dba3c410dba is in state STARTED
2026-03-29 01:07:30.360581 | orchestrator | 2026-03-29 01:07:30 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED
2026-03-29 01:07:30.369313 | orchestrator | 2026-03-29 01:07:30 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED
2026-03-29 01:07:30.372142 | orchestrator | 2026-03-29 01:07:30 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED
2026-03-29 01:07:30.372201 | orchestrator | 2026-03-29 01:07:30 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:07:33.412142 | orchestrator | 2026-03-29 01:07:33 | INFO  | Task e777afcd-bf7b-43de-ad1d-8dba3c410dba is in state STARTED
2026-03-29 01:07:33.415070 | orchestrator | 2026-03-29 01:07:33 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED
2026-03-29 01:07:33.415292 | orchestrator | 2026-03-29 01:07:33 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED
2026-03-29 01:07:33.416328 | orchestrator | 2026-03-29 01:07:33 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED
2026-03-29 01:07:33.416363 | orchestrator | 2026-03-29 01:07:33 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:07:36.447822 | orchestrator | 2026-03-29 01:07:36 | INFO  | Task e777afcd-bf7b-43de-ad1d-8dba3c410dba is in state STARTED
2026-03-29 01:07:36.449665 | orchestrator | 2026-03-29 01:07:36 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED
2026-03-29 01:07:36.451412 | orchestrator | 2026-03-29 01:07:36 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED
2026-03-29 01:07:36.453077 | orchestrator | 2026-03-29 01:07:36 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED
2026-03-29 01:07:36.453171 | orchestrator | 2026-03-29 01:07:36 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:07:39.498772 | orchestrator | 2026-03-29 01:07:39 | INFO  | Task e777afcd-bf7b-43de-ad1d-8dba3c410dba is in state STARTED
2026-03-29 01:07:39.501889 | orchestrator | 2026-03-29 01:07:39 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED
2026-03-29 01:07:39.505124 | orchestrator | 2026-03-29 01:07:39 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED
2026-03-29 01:07:39.507548 | orchestrator | 2026-03-29 01:07:39 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED
2026-03-29 01:07:39.507601 | orchestrator | 2026-03-29 01:07:39 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:07:42.547862 | orchestrator | 2026-03-29 01:07:42 | INFO  | Task e777afcd-bf7b-43de-ad1d-8dba3c410dba is in state STARTED
2026-03-29 01:07:42.550529 | orchestrator | 2026-03-29 01:07:42 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED
2026-03-29 01:07:42.552456 | orchestrator | 2026-03-29 01:07:42 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED
2026-03-29 01:07:42.554133 | orchestrator | 2026-03-29 01:07:42 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED
2026-03-29 01:07:42.554176 | orchestrator | 2026-03-29 01:07:42 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:07:45.593433 | orchestrator | 2026-03-29 01:07:45 | INFO  | Task e777afcd-bf7b-43de-ad1d-8dba3c410dba is in state STARTED
2026-03-29 01:07:45.594724 | orchestrator | 2026-03-29 01:07:45 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state STARTED
2026-03-29 01:07:45.596655 | orchestrator | 2026-03-29 01:07:45 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED
2026-03-29 01:07:45.598683 | orchestrator | 2026-03-29 01:07:45 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED
2026-03-29 01:07:45.598866 | orchestrator | 2026-03-29 01:07:45 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:07:48.645842 | orchestrator | 2026-03-29 01:07:48 | INFO  | Task e777afcd-bf7b-43de-ad1d-8dba3c410dba is in state STARTED
2026-03-29 01:07:48.648580 | orchestrator | 2026-03-29 01:07:48 | INFO  | Task c83b5385-8029-4990-a1a9-d2d242de7e84 is in state SUCCESS
2026-03-29 01:07:48.648719 | orchestrator |
2026-03-29 01:07:48.648771 | orchestrator |
2026-03-29 01:07:48.648778 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-03-29 01:07:48.648784 | orchestrator |
2026-03-29 01:07:48.648788 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-03-29 01:07:48.648794 | orchestrator | Sunday 29 March 2026 01:05:47 +0000 (0:00:00.226) 0:00:00.226 **********
2026-03-29 01:07:48.648799 | orchestrator | changed: [localhost]
2026-03-29 01:07:48.648805 | orchestrator |
2026-03-29 01:07:48.648810 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-03-29 01:07:48.648815 | orchestrator | Sunday 29 March 2026 01:05:49 +0000 (0:00:02.043) 0:00:02.269 **********
2026-03-29 01:07:48.648819 | orchestrator | changed: [localhost]
2026-03-29 01:07:48.648824 | orchestrator |
2026-03-29 01:07:48.648828 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-03-29 01:07:48.648833 | orchestrator | Sunday 29 March 2026 01:06:24 +0000 (0:00:35.040) 0:00:37.310 **********
2026-03-29 01:07:48.648848 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left).
2026-03-29 01:07:48.648854 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (2 retries left).
2026-03-29 01:07:48.648860 | orchestrator | changed: [localhost]
2026-03-29 01:07:48.648865 | orchestrator |
2026-03-29 01:07:48.648875 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 01:07:48.648878 | orchestrator |
2026-03-29 01:07:48.648883 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 01:07:48.648888 | orchestrator | Sunday 29 March 2026 01:07:16 +0000 (0:00:51.848) 0:01:29.158 **********
2026-03-29 01:07:48.648893 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:07:48.648902 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:07:48.648907 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:07:48.648926 | orchestrator |
2026-03-29 01:07:48.648932 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 01:07:48.648937 | orchestrator | Sunday 29 March 2026 01:07:16 +0000 (0:00:00.262) 0:01:29.421 **********
2026-03-29 01:07:48.648967 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-03-29 01:07:48.648974 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-03-29 01:07:48.648979 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-03-29 01:07:48.648985 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-03-29 01:07:48.648990 | orchestrator |
2026-03-29 01:07:48.648995 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-03-29 01:07:48.649043 | orchestrator | skipping: no hosts matched
2026-03-29 01:07:48.649061 | orchestrator |
2026-03-29 01:07:48.649072 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 01:07:48.649108 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 01:07:48.649116 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 01:07:48.649123 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 01:07:48.649133 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 01:07:48.649139 | orchestrator |
2026-03-29 01:07:48.649144 | orchestrator |
2026-03-29 01:07:48.649149 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 01:07:48.649153 | orchestrator | Sunday 29 March 2026 01:07:17 +0000 (0:00:00.611) 0:01:30.033 **********
2026-03-29 01:07:48.649156 | orchestrator | ===============================================================================
2026-03-29 01:07:48.649159 | orchestrator | Download ironic-agent kernel ------------------------------------------- 51.85s
2026-03-29 01:07:48.649162 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 35.04s
2026-03-29 01:07:48.649165 | orchestrator | Ensure the destination directory exists --------------------------------- 2.04s
2026-03-29 01:07:48.649169 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s
2026-03-29 01:07:48.649177 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s
2026-03-29 01:07:48.649180 | orchestrator |
2026-03-29 01:07:48.649627 | orchestrator |
2026-03-29 01:07:48.649641 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 01:07:48.649645 | orchestrator |
2026-03-29 01:07:48.649648 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 01:07:48.649657 | orchestrator | Sunday 29 March 2026 01:06:43 +0000 (0:00:00.300) 0:00:00.300 **********
2026-03-29 01:07:48.649660 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:07:48.649664 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:07:48.649667 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:07:48.649670 | orchestrator |
2026-03-29 01:07:48.649673 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 01:07:48.649677 | orchestrator | Sunday 29 March 2026 01:06:43 +0000 (0:00:00.275) 0:00:00.576 **********
2026-03-29 01:07:48.649680 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-03-29 01:07:48.649684 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-03-29 01:07:48.649687 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-03-29 01:07:48.649690 | orchestrator |
2026-03-29 01:07:48.649693 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-03-29 01:07:48.649696 | orchestrator |
2026-03-29 01:07:48.649699 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-29 01:07:48.649702 | orchestrator | Sunday 29 March 2026 01:06:43 +0000 (0:00:00.278) 0:00:00.854 ********** 2026-03-29
01:07:48.649713 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:07:48.649716 | orchestrator | 2026-03-29 01:07:48.649719 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-29 01:07:48.649722 | orchestrator | Sunday 29 March 2026 01:06:44 +0000 (0:00:00.658) 0:00:01.513 ********** 2026-03-29 01:07:48.649764 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-29 01:07:48.649768 | orchestrator | 2026-03-29 01:07:48.649772 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-29 01:07:48.649775 | orchestrator | Sunday 29 March 2026 01:06:48 +0000 (0:00:03.895) 0:00:05.408 ********** 2026-03-29 01:07:48.649778 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-29 01:07:48.649781 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-29 01:07:48.649784 | orchestrator | 2026-03-29 01:07:48.649787 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-29 01:07:48.649794 | orchestrator | Sunday 29 March 2026 01:06:54 +0000 (0:00:06.213) 0:00:11.622 ********** 2026-03-29 01:07:48.649798 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-29 01:07:48.649805 | orchestrator | 2026-03-29 01:07:48.649808 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-29 01:07:48.649811 | orchestrator | Sunday 29 March 2026 01:06:57 +0000 (0:00:02.770) 0:00:14.393 ********** 2026-03-29 01:07:48.649816 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-03-29 01:07:48.649821 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-29 01:07:48.649829 | orchestrator | 
2026-03-29 01:07:48.649835 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-29 01:07:48.649840 | orchestrator | Sunday 29 March 2026 01:07:00 +0000 (0:00:03.363) 0:00:17.756 ********** 2026-03-29 01:07:48.649845 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-29 01:07:48.649850 | orchestrator | 2026-03-29 01:07:48.649855 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-29 01:07:48.649860 | orchestrator | Sunday 29 March 2026 01:07:03 +0000 (0:00:02.864) 0:00:20.621 ********** 2026-03-29 01:07:48.649865 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-29 01:07:48.649870 | orchestrator | 2026-03-29 01:07:48.649874 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-29 01:07:48.649877 | orchestrator | Sunday 29 March 2026 01:07:07 +0000 (0:00:03.262) 0:00:23.883 ********** 2026-03-29 01:07:48.649880 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:48.649884 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:48.649887 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:48.649890 | orchestrator | 2026-03-29 01:07:48.649893 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-29 01:07:48.649896 | orchestrator | Sunday 29 March 2026 01:07:07 +0000 (0:00:00.240) 0:00:24.124 ********** 2026-03-29 01:07:48.649901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:07:48.649917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:07:48.649921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:07:48.649924 | orchestrator | 2026-03-29 01:07:48.649927 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-29 01:07:48.649933 | orchestrator | Sunday 29 March 2026 01:07:08 +0000 (0:00:01.631) 0:00:25.756 ********** 2026-03-29 01:07:48.649936 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:48.649939 | orchestrator | 2026-03-29 01:07:48.649943 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-29 01:07:48.649946 | orchestrator | Sunday 29 March 2026 01:07:09 +0000 (0:00:00.200) 0:00:25.956 ********** 2026-03-29 01:07:48.649949 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:48.649952 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:48.649955 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:48.649958 | orchestrator | 2026-03-29 01:07:48.649961 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-29 01:07:48.649964 | orchestrator | Sunday 29 March 2026 01:07:09 +0000 (0:00:00.288) 0:00:26.245 ********** 2026-03-29 01:07:48.649967 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:07:48.649971 | orchestrator | 2026-03-29 01:07:48.649974 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-29 01:07:48.649977 | orchestrator | Sunday 29 March 2026 01:07:10 +0000 (0:00:01.456) 0:00:27.702 ********** 2026-03-29 01:07:48.649980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 
'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:07:48.649989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:07:48.649993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:07:48.649996 | orchestrator | 2026-03-29 01:07:48.649999 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-29 01:07:48.650002 | orchestrator | Sunday 29 March 2026 01:07:12 +0000 (0:00:01.990) 0:00:29.692 ********** 2026-03-29 01:07:48.650007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 01:07:48.650010 | orchestrator | skipping: [testbed-node-1] 
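The skips in the cert-copy tasks follow from a conditional: backend TLS material is only copied when a service's haproxy entry enables `tls_backend`, and every `placement_api` item above carries `'tls_backend': 'no'`. A minimal sketch of that decision, using a trimmed-down service dict rather than the full kolla-ansible data structure:

```python
def needs_backend_tls(service):
    """True when any haproxy listener for the service enables backend TLS."""
    listeners = service.get("haproxy", {})
    return any(str(cfg.get("tls_backend", "no")).lower() in ("yes", "true")
               for cfg in listeners.values())

# Trimmed-down version of the placement-api entry seen in the log items above.
placement_api = {
    "container_name": "placement_api",
    "haproxy": {
        "placement_api": {"enabled": True, "port": "8780", "tls_backend": "no"},
        "placement_api_external": {"enabled": True, "port": "8780", "tls_backend": "no"},
    },
}

action = "copy" if needs_backend_tls(placement_api) else "skip"
```

With `tls_backend` set to `'no'` on both listeners the result is `skip` on every node, matching the output above.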
2026-03-29 01:07:48.650034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 01:07:48.650041 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:48.650047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 01:07:48.650050 | orchestrator | skipping: 
[testbed-node-2] 2026-03-29 01:07:48.650053 | orchestrator | 2026-03-29 01:07:48.650056 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-29 01:07:48.650059 | orchestrator | Sunday 29 March 2026 01:07:13 +0000 (0:00:00.607) 0:00:30.300 ********** 2026-03-29 01:07:48.650063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 01:07:48.650066 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:48.650071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 01:07:48.650074 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:48.650077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 01:07:48.650083 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:48.650086 | orchestrator | 2026-03-29 01:07:48.650089 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-29 01:07:48.650092 | orchestrator | Sunday 29 March 2026 01:07:14 +0000 (0:00:01.130) 0:00:31.431 ********** 2026-03-29 01:07:48.650097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:07:48.650101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:07:48.650106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:07:48.650109 | orchestrator | 2026-03-29 01:07:48.650112 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-29 01:07:48.650116 | orchestrator | Sunday 29 March 2026 01:07:16 +0000 (0:00:01.897) 0:00:33.328 ********** 2026-03-29 01:07:48.650119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:07:48.650126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:07:48.650132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:07:48.650135 | orchestrator | 2026-03-29 01:07:48.650138 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-29 01:07:48.650141 | orchestrator | Sunday 29 March 2026 01:07:19 +0000 (0:00:02.886) 0:00:36.215 ********** 2026-03-29 01:07:48.650144 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-29 01:07:48.650148 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-29 01:07:48.650151 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-29 01:07:48.650154 | orchestrator | 2026-03-29 01:07:48.650157 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-29 01:07:48.650160 | orchestrator | Sunday 29 March 2026 01:07:21 +0000 (0:00:02.233) 0:00:38.448 ********** 2026-03-29 01:07:48.650163 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:07:48.650166 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:07:48.650169 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:07:48.650173 | orchestrator | 2026-03-29 01:07:48.650176 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-29 01:07:48.650179 | orchestrator | Sunday 29 March 2026 01:07:23 +0000 (0:00:02.036) 0:00:40.485 ********** 2026-03-29 01:07:48.650184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 01:07:48.650191 | 
orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:48.650195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 01:07:48.650198 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:48.650203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 
01:07:48.650207 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:48.650210 | orchestrator | 2026-03-29 01:07:48.650213 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-29 01:07:48.650216 | orchestrator | Sunday 29 March 2026 01:07:24 +0000 (0:00:01.063) 0:00:41.548 ********** 2026-03-29 01:07:48.650219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:07:48.650225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:07:48.650230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:07:48.650233 | orchestrator | 2026-03-29 01:07:48.650236 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-29 01:07:48.650239 | orchestrator | Sunday 29 March 2026 01:07:25 +0000 (0:00:01.162) 0:00:42.710 ********** 2026-03-29 01:07:48.650243 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:07:48.650247 | orchestrator | 2026-03-29 01:07:48.650251 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-29 01:07:48.650254 | orchestrator | Sunday 29 March 2026 01:07:27 +0000 (0:00:02.142) 0:00:44.853 ********** 2026-03-29 01:07:48.650258 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:07:48.650261 | orchestrator | 2026-03-29 01:07:48.650265 | orchestrator | TASK [placement : Running placement bootstrap 
container] *********************** 2026-03-29 01:07:48.650268 | orchestrator | Sunday 29 March 2026 01:07:30 +0000 (0:00:02.111) 0:00:46.964 ********** 2026-03-29 01:07:48.650272 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:07:48.650276 | orchestrator | 2026-03-29 01:07:48.650279 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-29 01:07:48.650283 | orchestrator | Sunday 29 March 2026 01:07:42 +0000 (0:00:12.682) 0:00:59.646 ********** 2026-03-29 01:07:48.650287 | orchestrator | 2026-03-29 01:07:48.650290 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-29 01:07:48.650294 | orchestrator | Sunday 29 March 2026 01:07:42 +0000 (0:00:00.058) 0:00:59.705 ********** 2026-03-29 01:07:48.650298 | orchestrator | 2026-03-29 01:07:48.650303 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-29 01:07:48.650309 | orchestrator | Sunday 29 March 2026 01:07:42 +0000 (0:00:00.059) 0:00:59.764 ********** 2026-03-29 01:07:48.650314 | orchestrator | 2026-03-29 01:07:48.650319 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-29 01:07:48.650327 | orchestrator | Sunday 29 March 2026 01:07:42 +0000 (0:00:00.060) 0:00:59.824 ********** 2026-03-29 01:07:48.650333 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:07:48.650338 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:07:48.650343 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:07:48.650348 | orchestrator | 2026-03-29 01:07:48.650354 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:07:48.650359 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 01:07:48.650364 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-03-29 01:07:48.650374 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-29 01:07:48.650379 | orchestrator | 2026-03-29 01:07:48.650385 | orchestrator | 2026-03-29 01:07:48.650390 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:07:48.650395 | orchestrator | Sunday 29 March 2026 01:07:47 +0000 (0:00:04.785) 0:01:04.610 ********** 2026-03-29 01:07:48.650401 | orchestrator | =============================================================================== 2026-03-29 01:07:48.650406 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.68s 2026-03-29 01:07:48.650411 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.21s 2026-03-29 01:07:48.650417 | orchestrator | placement : Restart placement-api container ----------------------------- 4.79s 2026-03-29 01:07:48.650422 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.90s 2026-03-29 01:07:48.650427 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.36s 2026-03-29 01:07:48.650433 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.26s 2026-03-29 01:07:48.650439 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.89s 2026-03-29 01:07:48.650445 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 2.86s 2026-03-29 01:07:48.650449 | orchestrator | service-ks-register : placement | Creating projects --------------------- 2.77s 2026-03-29 01:07:48.650452 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.23s 2026-03-29 01:07:48.650456 | orchestrator | placement : Creating placement databases -------------------------------- 2.14s 2026-03-29 01:07:48.650460 | 
orchestrator | placement : Creating placement databases user and setting permissions --- 2.11s 2026-03-29 01:07:48.650463 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.04s 2026-03-29 01:07:48.650467 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.99s 2026-03-29 01:07:48.650470 | orchestrator | placement : Copying over config.json files for services ----------------- 1.90s 2026-03-29 01:07:48.650474 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.63s 2026-03-29 01:07:48.650478 | orchestrator | placement : include_tasks ----------------------------------------------- 1.46s 2026-03-29 01:07:48.650481 | orchestrator | placement : Check placement containers ---------------------------------- 1.16s 2026-03-29 01:07:48.650485 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.13s 2026-03-29 01:07:48.650489 | orchestrator | placement : Copying over existing policy file --------------------------- 1.06s 2026-03-29 01:07:48.650960 | orchestrator | 2026-03-29 01:07:48 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED 2026-03-29 01:07:48.652355 | orchestrator | 2026-03-29 01:07:48 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:07:48.652411 | orchestrator | 2026-03-29 01:07:48 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:51.680053 | orchestrator | 2026-03-29 01:07:51 | INFO  | Task e777afcd-bf7b-43de-ad1d-8dba3c410dba is in state STARTED 2026-03-29 01:07:51.681445 | orchestrator | 2026-03-29 01:07:51 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED 2026-03-29 01:07:51.682261 | orchestrator | 2026-03-29 01:07:51 | INFO  | Task 12879e08-866f-411e-86e5-19578cc2ff39 is in state STARTED 2026-03-29 01:07:51.682851 | orchestrator | 2026-03-29 01:07:51 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is 
in state STARTED 2026-03-29 01:07:51.682874 | orchestrator | 2026-03-29 01:07:51 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:54.713951 | orchestrator | 2026-03-29 01:07:54 | INFO  | Task e777afcd-bf7b-43de-ad1d-8dba3c410dba is in state STARTED 2026-03-29 01:07:54.715127 | orchestrator | 2026-03-29 01:07:54 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state STARTED 2026-03-29 01:07:54.715910 | orchestrator | 2026-03-29 01:07:54 | INFO  | Task 12879e08-866f-411e-86e5-19578cc2ff39 is in state STARTED 2026-03-29 01:07:54.716886 | orchestrator | 2026-03-29 01:07:54 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:07:54.716921 | orchestrator | 2026-03-29 01:07:54 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:57.762157 | orchestrator | 2026-03-29 01:07:57 | INFO  | Task e777afcd-bf7b-43de-ad1d-8dba3c410dba is in state STARTED 2026-03-29 01:07:57.767981 | orchestrator | 2026-03-29 01:07:57 | INFO  | Task 248abb42-edd4-4003-8f12-64ca6a92da80 is in state SUCCESS 2026-03-29 01:07:57.769608 | orchestrator | 2026-03-29 01:07:57.769660 | orchestrator | 2026-03-29 01:07:57.769668 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:07:57.769674 | orchestrator | 2026-03-29 01:07:57.769680 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:07:57.769685 | orchestrator | Sunday 29 March 2026 01:03:36 +0000 (0:00:00.302) 0:00:00.302 ********** 2026-03-29 01:07:57.769691 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:07:57.769697 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:07:57.769703 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:07:57.769708 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:07:57.769723 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:07:57.769729 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:07:57.769734 | orchestrator | 
2026-03-29 01:07:57.769740 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:07:57.769744 | orchestrator | Sunday 29 March 2026 01:03:36 +0000 (0:00:00.534) 0:00:00.836 ********** 2026-03-29 01:07:57.769747 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-29 01:07:57.769751 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-29 01:07:57.769754 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-29 01:07:57.769757 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-29 01:07:57.769760 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-29 01:07:57.769764 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-29 01:07:57.769767 | orchestrator | 2026-03-29 01:07:57.769770 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-29 01:07:57.769773 | orchestrator | 2026-03-29 01:07:57.769777 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-29 01:07:57.769780 | orchestrator | Sunday 29 March 2026 01:03:37 +0000 (0:00:00.625) 0:00:01.461 ********** 2026-03-29 01:07:57.769829 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 01:07:57.769835 | orchestrator | 2026-03-29 01:07:57.769838 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-29 01:07:57.769841 | orchestrator | Sunday 29 March 2026 01:03:38 +0000 (0:00:01.026) 0:00:02.488 ********** 2026-03-29 01:07:57.769844 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:07:57.769853 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:07:57.769856 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:07:57.769859 | orchestrator | ok: 
[testbed-node-2] 2026-03-29 01:07:57.769863 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:07:57.769866 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:07:57.769873 | orchestrator | 2026-03-29 01:07:57.769895 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-29 01:07:57.769907 | orchestrator | Sunday 29 March 2026 01:03:39 +0000 (0:00:01.424) 0:00:03.913 ********** 2026-03-29 01:07:57.769913 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:07:57.769924 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:07:57.769928 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:07:57.769931 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:07:57.769953 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:07:57.769957 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:07:57.769962 | orchestrator | 2026-03-29 01:07:57.769967 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-29 01:07:57.769972 | orchestrator | Sunday 29 March 2026 01:03:40 +0000 (0:00:01.226) 0:00:05.139 ********** 2026-03-29 01:07:57.769977 | orchestrator | ok: [testbed-node-0] => { 2026-03-29 01:07:57.769983 | orchestrator |  "changed": false, 2026-03-29 01:07:57.769988 | orchestrator |  "msg": "All assertions passed" 2026-03-29 01:07:57.769993 | orchestrator | } 2026-03-29 01:07:57.769998 | orchestrator | ok: [testbed-node-1] => { 2026-03-29 01:07:57.770003 | orchestrator |  "changed": false, 2026-03-29 01:07:57.770008 | orchestrator |  "msg": "All assertions passed" 2026-03-29 01:07:57.770039 | orchestrator | } 2026-03-29 01:07:57.770046 | orchestrator | ok: [testbed-node-2] => { 2026-03-29 01:07:57.770051 | orchestrator |  "changed": false, 2026-03-29 01:07:57.770057 | orchestrator |  "msg": "All assertions passed" 2026-03-29 01:07:57.770062 | orchestrator | } 2026-03-29 01:07:57.770068 | orchestrator | ok: [testbed-node-3] => { 2026-03-29 01:07:57.770073 | orchestrator |  "changed": 
false, 2026-03-29 01:07:57.770078 | orchestrator |  "msg": "All assertions passed" 2026-03-29 01:07:57.770084 | orchestrator | } 2026-03-29 01:07:57.770089 | orchestrator | ok: [testbed-node-4] => { 2026-03-29 01:07:57.770094 | orchestrator |  "changed": false, 2026-03-29 01:07:57.770100 | orchestrator |  "msg": "All assertions passed" 2026-03-29 01:07:57.770105 | orchestrator | } 2026-03-29 01:07:57.770111 | orchestrator | ok: [testbed-node-5] => { 2026-03-29 01:07:57.770116 | orchestrator |  "changed": false, 2026-03-29 01:07:57.770122 | orchestrator |  "msg": "All assertions passed" 2026-03-29 01:07:57.770127 | orchestrator | } 2026-03-29 01:07:57.770133 | orchestrator | 2026-03-29 01:07:57.770138 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-29 01:07:57.770144 | orchestrator | Sunday 29 March 2026 01:03:41 +0000 (0:00:00.569) 0:00:05.709 ********** 2026-03-29 01:07:57.770149 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.770155 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.770161 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.770167 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.770172 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.770178 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.770184 | orchestrator | 2026-03-29 01:07:57.770190 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-29 01:07:57.770195 | orchestrator | Sunday 29 March 2026 01:03:42 +0000 (0:00:00.986) 0:00:06.695 ********** 2026-03-29 01:07:57.770201 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-29 01:07:57.770208 | orchestrator | 2026-03-29 01:07:57.770214 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-29 01:07:57.770219 | orchestrator | Sunday 29 March 2026 01:03:46 +0000 (0:00:03.970) 
0:00:10.666 ********** 2026-03-29 01:07:57.770225 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-29 01:07:57.770231 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-29 01:07:57.770237 | orchestrator | 2026-03-29 01:07:57.770252 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-29 01:07:57.770258 | orchestrator | Sunday 29 March 2026 01:03:52 +0000 (0:00:06.495) 0:00:17.162 ********** 2026-03-29 01:07:57.770263 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-29 01:07:57.770268 | orchestrator | 2026-03-29 01:07:57.770274 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-29 01:07:57.770280 | orchestrator | Sunday 29 March 2026 01:03:56 +0000 (0:00:03.364) 0:00:20.527 ********** 2026-03-29 01:07:57.770285 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-03-29 01:07:57.770291 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-29 01:07:57.770301 | orchestrator | 2026-03-29 01:07:57.770307 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-29 01:07:57.770312 | orchestrator | Sunday 29 March 2026 01:03:59 +0000 (0:00:03.516) 0:00:24.043 ********** 2026-03-29 01:07:57.770318 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-29 01:07:57.770324 | orchestrator | 2026-03-29 01:07:57.770329 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-29 01:07:57.770335 | orchestrator | Sunday 29 March 2026 01:04:03 +0000 (0:00:03.933) 0:00:27.977 ********** 2026-03-29 01:07:57.770340 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-29 01:07:57.770345 | orchestrator | changed: [testbed-node-0] => 
(item=neutron -> service -> service) 2026-03-29 01:07:57.770349 | orchestrator | 2026-03-29 01:07:57.770353 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-29 01:07:57.770357 | orchestrator | Sunday 29 March 2026 01:04:11 +0000 (0:00:07.460) 0:00:35.437 ********** 2026-03-29 01:07:57.770361 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.770386 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.770390 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.770394 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.770397 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.770404 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.770408 | orchestrator | 2026-03-29 01:07:57.770427 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-29 01:07:57.770431 | orchestrator | Sunday 29 March 2026 01:04:11 +0000 (0:00:00.531) 0:00:35.969 ********** 2026-03-29 01:07:57.770435 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.770443 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.770446 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.770450 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.770453 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.770457 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.770461 | orchestrator | 2026-03-29 01:07:57.770465 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-29 01:07:57.770468 | orchestrator | Sunday 29 March 2026 01:04:14 +0000 (0:00:02.424) 0:00:38.394 ********** 2026-03-29 01:07:57.770472 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:07:57.770476 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:07:57.770479 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:07:57.770483 | orchestrator | ok: 
[testbed-node-3] 2026-03-29 01:07:57.770487 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:07:57.770490 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:07:57.770494 | orchestrator | 2026-03-29 01:07:57.770498 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-29 01:07:57.770501 | orchestrator | Sunday 29 March 2026 01:04:15 +0000 (0:00:00.949) 0:00:39.343 ********** 2026-03-29 01:07:57.770505 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.770509 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.770513 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.770517 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.770521 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.770524 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.770528 | orchestrator | 2026-03-29 01:07:57.770532 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-29 01:07:57.770535 | orchestrator | Sunday 29 March 2026 01:04:17 +0000 (0:00:02.076) 0:00:41.420 ********** 2026-03-29 01:07:57.770541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:57.770555 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:57.770560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:57.770567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:57.770572 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:57.770577 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:57.770583 | orchestrator | 2026-03-29 01:07:57.770587 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-29 01:07:57.770591 | orchestrator | Sunday 29 March 2026 01:04:19 +0000 (0:00:02.704) 0:00:44.125 ********** 2026-03-29 01:07:57.770595 | orchestrator | [WARNING]: Skipped 2026-03-29 01:07:57.770600 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-29 01:07:57.770604 | orchestrator | due to this access issue: 2026-03-29 01:07:57.770608 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-29 01:07:57.770611 | orchestrator | a directory 2026-03-29 01:07:57.770617 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 01:07:57.770622 | orchestrator | 2026-03-29 01:07:57.770628 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-29 01:07:57.770636 | orchestrator | Sunday 29 March 2026 01:04:20 +0000 (0:00:00.884) 0:00:45.009 ********** 2026-03-29 01:07:57.770641 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 01:07:57.770648 | orchestrator | 2026-03-29 01:07:57.770653 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-29 01:07:57.770658 | orchestrator | Sunday 29 March 2026 01:04:21 +0000 (0:00:01.094) 0:00:46.104 ********** 2026-03-29 01:07:57.770664 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:57.770673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:57.770679 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:57.770689 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:57.770699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:57.770705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:57.770710 | orchestrator | 2026-03-29 01:07:57.770730 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-29 01:07:57.770735 | orchestrator | Sunday 29 March 2026 01:04:25 +0000 (0:00:03.795) 0:00:49.900 ********** 2026-03-29 01:07:57.770742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.770753 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.770759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.770763 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.770767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.770770 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.770776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.770779 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.770784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.770788 | orchestrator | skipping: 
[testbed-node-5] 2026-03-29 01:07:57.770792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.770797 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.770801 | orchestrator | 2026-03-29 01:07:57.770804 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-29 01:07:57.770809 | orchestrator | Sunday 29 March 2026 01:04:28 +0000 (0:00:02.670) 0:00:52.570 ********** 2026-03-29 01:07:57.770815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.770820 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.770828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.770834 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.770839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.770845 | orchestrator | skipping: 
[testbed-node-5] 2026-03-29 01:07:57.770853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.770865 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.770870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.770876 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.770882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.770890 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.770899 | orchestrator | 2026-03-29 01:07:57.770904 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-29 01:07:57.770909 | orchestrator | Sunday 29 March 2026 01:04:32 +0000 (0:00:04.215) 0:00:56.786 ********** 2026-03-29 01:07:57.770914 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.770919 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.770924 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.770929 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.770934 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.770939 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.770943 | orchestrator | 2026-03-29 01:07:57.770948 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-29 01:07:57.770957 | orchestrator | Sunday 29 March 2026 01:04:35 +0000 (0:00:02.475) 0:00:59.262 ********** 2026-03-29 01:07:57.770962 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.770966 | orchestrator | 2026-03-29 01:07:57.770972 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-29 01:07:57.770977 | 
orchestrator | Sunday 29 March 2026 01:04:35 +0000 (0:00:00.301) 0:00:59.563 ********** 2026-03-29 01:07:57.770982 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.770987 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.770992 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.770997 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.771002 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.771008 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.771016 | orchestrator | 2026-03-29 01:07:57.771022 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-29 01:07:57.771027 | orchestrator | Sunday 29 March 2026 01:04:35 +0000 (0:00:00.589) 0:01:00.152 ********** 2026-03-29 01:07:57.771036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.771046 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.771051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.771056 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.771061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.771066 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.771075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.771080 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.771086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.771094 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.771101 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.771107 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.771112 | orchestrator | 2026-03-29 01:07:57.771117 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-29 01:07:57.771122 | orchestrator | Sunday 29 March 2026 01:04:39 +0000 (0:00:03.620) 0:01:03.773 ********** 2026-03-29 01:07:57.771127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:57.771133 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:57.771143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:57.771155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:57.771163 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:57.771169 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:57.771175 | orchestrator | 2026-03-29 01:07:57.771180 | orchestrator | TASK [neutron : Copying 
over neutron.conf] ************************************* 2026-03-29 01:07:57.771186 | orchestrator | Sunday 29 March 2026 01:04:44 +0000 (0:00:04.542) 0:01:08.315 ********** 2026-03-29 01:07:57.771191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:57.771201 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:57.771210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:57.771218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:57.771223 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:57.771228 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:57.771233 | orchestrator | 2026-03-29 01:07:57.771239 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-29 01:07:57.771244 | orchestrator | Sunday 29 March 2026 01:04:53 +0000 (0:00:08.973) 0:01:17.288 ********** 2026-03-29 01:07:57.771254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.771264 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.771272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.771278 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.771283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.771289 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.771295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.771300 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.771306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.771315 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.771324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.771329 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.771335 | orchestrator | 2026-03-29 01:07:57.771340 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-29 01:07:57.771345 | orchestrator | Sunday 29 March 2026 01:04:56 +0000 (0:00:03.348) 0:01:20.637 ********** 2026-03-29 01:07:57.771351 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:07:57.771356 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:07:57.771361 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.771367 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.771372 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:07:57.771377 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.771382 | orchestrator | 2026-03-29 01:07:57.771387 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] 
************************************* 2026-03-29 01:07:57.771392 | orchestrator | Sunday 29 March 2026 01:05:00 +0000 (0:00:04.442) 0:01:25.079 ********** 2026-03-29 01:07:57.771400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.771405 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.771412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.771417 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.771422 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.771431 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.771441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:57.771447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:57.771455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:57.771461 | orchestrator | 2026-03-29 01:07:57.771466 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-29 01:07:57.771471 | orchestrator | Sunday 29 March 2026 01:05:05 +0000 (0:00:04.580) 0:01:29.660 ********** 2026-03-29 01:07:57.771477 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.771482 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.771488 | orchestrator | skipping: [testbed-node-2] 
2026-03-29 01:07:57.771493 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.771498 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.771504 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.771515 | orchestrator | 2026-03-29 01:07:57.771520 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-29 01:07:57.771526 | orchestrator | Sunday 29 March 2026 01:05:08 +0000 (0:00:03.310) 0:01:32.970 ********** 2026-03-29 01:07:57.771531 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.771537 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.771542 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.771547 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.771553 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.771558 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.771563 | orchestrator | 2026-03-29 01:07:57.771568 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-29 01:07:57.771573 | orchestrator | Sunday 29 March 2026 01:05:11 +0000 (0:00:02.781) 0:01:35.752 ********** 2026-03-29 01:07:57.771579 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.771584 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.771589 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.771595 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.771600 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.771605 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.771610 | orchestrator | 2026-03-29 01:07:57.771616 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-29 01:07:57.771621 | orchestrator | Sunday 29 March 2026 01:05:13 +0000 (0:00:01.799) 0:01:37.552 ********** 2026-03-29 01:07:57.771626 | orchestrator | skipping: [testbed-node-0] 
2026-03-29 01:07:57.771632 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.771637 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.771642 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.771648 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.771653 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.771658 | orchestrator | 2026-03-29 01:07:57.771664 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-29 01:07:57.771669 | orchestrator | Sunday 29 March 2026 01:05:16 +0000 (0:00:02.988) 0:01:40.541 ********** 2026-03-29 01:07:57.771675 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.771680 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.771685 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.771691 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.771700 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.771705 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.771710 | orchestrator | 2026-03-29 01:07:57.771831 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-29 01:07:57.771837 | orchestrator | Sunday 29 March 2026 01:05:18 +0000 (0:00:02.239) 0:01:42.781 ********** 2026-03-29 01:07:57.771842 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.771848 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.771862 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.771867 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.771872 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.771878 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.771883 | orchestrator | 2026-03-29 01:07:57.771889 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-29 01:07:57.771893 | orchestrator | Sunday 
29 March 2026 01:05:21 +0000 (0:00:02.913) 0:01:45.694 ********** 2026-03-29 01:07:57.771896 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-29 01:07:57.771900 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.771904 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-29 01:07:57.771907 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.771910 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-29 01:07:57.771913 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.771921 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-29 01:07:57.771925 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.771928 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-29 01:07:57.771931 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.771937 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-29 01:07:57.771940 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.771946 | orchestrator | 2026-03-29 01:07:57.771951 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-29 01:07:57.771956 | orchestrator | Sunday 29 March 2026 01:05:23 +0000 (0:00:02.173) 0:01:47.867 ********** 2026-03-29 01:07:57.771963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.771969 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.771974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.771986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.771991 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.771996 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.772002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.772012 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.772020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.772027 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.772032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.772038 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.772043 | orchestrator | 2026-03-29 01:07:57.772049 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-29 01:07:57.772054 | orchestrator | Sunday 29 March 2026 01:05:25 +0000 (0:00:01.925) 0:01:49.793 ********** 2026-03-29 01:07:57.772059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.772065 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.772241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.772256 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.772265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.772271 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.772276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.772282 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.772288 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.772293 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.772299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.772304 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.772310 | orchestrator | 2026-03-29 01:07:57.772315 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-29 01:07:57.772321 | orchestrator | Sunday 29 March 2026 01:05:27 +0000 (0:00:01.798) 0:01:51.591 ********** 2026-03-29 01:07:57.772330 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.772338 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.772344 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.772349 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.772355 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.772360 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.772366 | orchestrator | 2026-03-29 01:07:57.772371 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-29 01:07:57.772377 | orchestrator | Sunday 29 March 2026 01:05:29 +0000 (0:00:01.720) 0:01:53.312 ********** 
2026-03-29 01:07:57.772382 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.772387 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.772392 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.772397 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:07:57.772402 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:07:57.772407 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:07:57.772412 | orchestrator | 2026-03-29 01:07:57.772417 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-29 01:07:57.772422 | orchestrator | Sunday 29 March 2026 01:05:32 +0000 (0:00:03.856) 0:01:57.168 ********** 2026-03-29 01:07:57.772427 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.772432 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.772437 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.772442 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.772447 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.772453 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.772458 | orchestrator | 2026-03-29 01:07:57.772463 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-29 01:07:57.772469 | orchestrator | Sunday 29 March 2026 01:05:35 +0000 (0:00:02.488) 0:01:59.656 ********** 2026-03-29 01:07:57.772474 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.772480 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.772485 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.772490 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.772496 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.772501 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.772507 | orchestrator | 2026-03-29 01:07:57.772516 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] 
********************************** 2026-03-29 01:07:57.772521 | orchestrator | Sunday 29 March 2026 01:05:38 +0000 (0:00:02.655) 0:02:02.312 ********** 2026-03-29 01:07:57.772527 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.772532 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.772538 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.772543 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.772549 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.772554 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.772559 | orchestrator | 2026-03-29 01:07:57.772565 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-29 01:07:57.772570 | orchestrator | Sunday 29 March 2026 01:05:40 +0000 (0:00:02.185) 0:02:04.497 ********** 2026-03-29 01:07:57.772576 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.772581 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.772587 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.772592 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.772597 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.772602 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.772607 | orchestrator | 2026-03-29 01:07:57.772612 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-29 01:07:57.772617 | orchestrator | Sunday 29 March 2026 01:05:42 +0000 (0:00:02.474) 0:02:06.971 ********** 2026-03-29 01:07:57.772622 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.772627 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.772632 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.772640 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.772645 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.772650 | orchestrator | skipping: 
[testbed-node-4] 2026-03-29 01:07:57.772655 | orchestrator | 2026-03-29 01:07:57.772660 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-03-29 01:07:57.772665 | orchestrator | Sunday 29 March 2026 01:05:45 +0000 (0:00:02.987) 0:02:09.959 ********** 2026-03-29 01:07:57.772670 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.772676 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.772681 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.772686 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.772691 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.772696 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.772701 | orchestrator | 2026-03-29 01:07:57.772706 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-29 01:07:57.772727 | orchestrator | Sunday 29 March 2026 01:05:49 +0000 (0:00:03.560) 0:02:13.520 ********** 2026-03-29 01:07:57.772733 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.772739 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.772744 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.772749 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.772754 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.772759 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.772764 | orchestrator | 2026-03-29 01:07:57.772770 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-29 01:07:57.772775 | orchestrator | Sunday 29 March 2026 01:05:52 +0000 (0:00:03.105) 0:02:16.625 ********** 2026-03-29 01:07:57.772780 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-29 01:07:57.772786 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.772791 | orchestrator | skipping: 
[testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-29 01:07:57.772796 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.772801 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-29 01:07:57.772807 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.772812 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-29 01:07:57.772817 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.772826 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-29 01:07:57.772831 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.772837 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-29 01:07:57.772842 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.772847 | orchestrator | 2026-03-29 01:07:57.772852 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-29 01:07:57.772857 | orchestrator | Sunday 29 March 2026 01:05:54 +0000 (0:00:02.054) 0:02:18.679 ********** 2026-03-29 01:07:57.772863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.772873 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.772881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.772887 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.772893 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.772899 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.772904 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.772910 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.772920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:57.772925 | orchestrator | skipping: [testbed-node-1] 
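The neutron-server containers in the items above use a `healthcheck_curl http://192.168.16.1x:9696` test instead. Plain curl exits 0 for any HTTP status, so "the API answered at all" counts as healthy and only connection-level failures fail the check. A minimal sketch of that semantic (an assumption about the helper's behavior, not its actual implementation) using only the standard library:

```python
from urllib.error import HTTPError, URLError
from urllib.request import urlopen


def http_endpoint_responds(url: str, timeout: float = 30.0) -> bool:
    """Return True if the endpoint returns any HTTP response at all.

    Rough analogue of the `healthcheck_curl <url>` test in the container
    definitions: an error status (4xx/5xx) still means the server is up;
    only connect/DNS/timeout failures count as unhealthy.
    """
    try:
        with urlopen(url, timeout=timeout):
            return True
    except HTTPError:
        return True   # server responded, even if with an error status
    except (URLError, OSError):
        return False  # no HTTP response at all
```

Under that reading, `http_endpoint_responds("http://192.168.16.10:9696")` reports healthy as soon as neutron-server accepts connections on its API port, which matches the per-node VIPs (.10/.11/.12) seen in the healthcheck definitions.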
2026-03-29 01:07:57.772933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:57.772941 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.772945 | orchestrator | 2026-03-29 01:07:57.772949 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-29 01:07:57.772952 | orchestrator | Sunday 29 March 2026 01:05:57 +0000 (0:00:03.342) 0:02:22.021 ********** 2026-03-29 01:07:57.772956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:57.772960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:57.772968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:57.772972 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:57.772983 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:57.772987 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:57.772990 | orchestrator | 2026-03-29 01:07:57.772994 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-29 01:07:57.772998 | orchestrator | Sunday 29 March 2026 01:06:01 +0000 (0:00:04.050) 0:02:26.071 ********** 2026-03-29 01:07:57.773002 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:57.773005 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:57.773009 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:57.773012 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:57.773016 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:57.773019 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:57.773023 | orchestrator | 2026-03-29 01:07:57.773027 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-29 01:07:57.773030 | orchestrator | Sunday 29 March 2026 01:06:02 +0000 (0:00:00.676) 0:02:26.748 ********** 2026-03-29 01:07:57.773034 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:07:57.773038 | orchestrator | 2026-03-29 01:07:57.773043 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-29 01:07:57.773048 | orchestrator | Sunday 29 March 2026 01:06:04 +0000 (0:00:02.054) 0:02:28.803 ********** 2026-03-29 01:07:57.773053 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:07:57.773058 | orchestrator | 2026-03-29 01:07:57.773063 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-29 01:07:57.773069 | orchestrator | Sunday 29 March 2026 01:06:07 +0000 (0:00:02.496) 0:02:31.299 ********** 2026-03-29 01:07:57.773074 | 
orchestrator | changed: [testbed-node-0] 2026-03-29 01:07:57.773079 | orchestrator | 2026-03-29 01:07:57.773084 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-29 01:07:57.773089 | orchestrator | Sunday 29 March 2026 01:06:47 +0000 (0:00:40.850) 0:03:12.149 ********** 2026-03-29 01:07:57.773095 | orchestrator | 2026-03-29 01:07:57.773100 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-29 01:07:57.773105 | orchestrator | Sunday 29 March 2026 01:06:48 +0000 (0:00:00.130) 0:03:12.281 ********** 2026-03-29 01:07:57.773110 | orchestrator | 2026-03-29 01:07:57.773115 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-29 01:07:57.773120 | orchestrator | Sunday 29 March 2026 01:06:48 +0000 (0:00:00.134) 0:03:12.415 ********** 2026-03-29 01:07:57.773129 | orchestrator | 2026-03-29 01:07:57.773134 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-29 01:07:57.773139 | orchestrator | Sunday 29 March 2026 01:06:48 +0000 (0:00:00.136) 0:03:12.551 ********** 2026-03-29 01:07:57.773144 | orchestrator | 2026-03-29 01:07:57.773154 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-29 01:07:57.773158 | orchestrator | Sunday 29 March 2026 01:06:48 +0000 (0:00:00.173) 0:03:12.725 ********** 2026-03-29 01:07:57.773161 | orchestrator | 2026-03-29 01:07:57.773165 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-29 01:07:57.773169 | orchestrator | Sunday 29 March 2026 01:06:48 +0000 (0:00:00.112) 0:03:12.837 ********** 2026-03-29 01:07:57.773172 | orchestrator | 2026-03-29 01:07:57.773176 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-29 01:07:57.773180 | orchestrator | Sunday 29 March 2026 01:06:48 
+0000 (0:00:00.080) 0:03:12.918 ********** 2026-03-29 01:07:57.773183 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:07:57.773187 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:07:57.773190 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:07:57.773194 | orchestrator | 2026-03-29 01:07:57.773198 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-29 01:07:57.773201 | orchestrator | Sunday 29 March 2026 01:07:08 +0000 (0:00:20.129) 0:03:33.047 ********** 2026-03-29 01:07:57.773205 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:07:57.773208 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:07:57.773212 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:07:57.773216 | orchestrator | 2026-03-29 01:07:57.773219 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:07:57.773223 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-29 01:07:57.773228 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-29 01:07:57.773234 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-29 01:07:57.773238 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-29 01:07:57.773242 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-29 01:07:57.773246 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-29 01:07:57.773250 | orchestrator | 2026-03-29 01:07:57.773253 | orchestrator | 2026-03-29 01:07:57.773257 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:07:57.773261 | orchestrator | Sunday 29 March 
2026 01:07:56 +0000 (0:00:48.046) 0:04:21.094 ********** 2026-03-29 01:07:57.773265 | orchestrator | =============================================================================== 2026-03-29 01:07:57.773269 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 48.05s 2026-03-29 01:07:57.773273 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.85s 2026-03-29 01:07:57.773276 | orchestrator | neutron : Restart neutron-server container ----------------------------- 20.13s 2026-03-29 01:07:57.773280 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 8.97s 2026-03-29 01:07:57.773284 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.46s 2026-03-29 01:07:57.773287 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.50s 2026-03-29 01:07:57.773290 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.58s 2026-03-29 01:07:57.773296 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.54s 2026-03-29 01:07:57.773299 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.44s 2026-03-29 01:07:57.773302 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.22s 2026-03-29 01:07:57.773305 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.05s 2026-03-29 01:07:57.773308 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.97s 2026-03-29 01:07:57.773311 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.93s 2026-03-29 01:07:57.773314 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.86s 2026-03-29 01:07:57.773317 | orchestrator | service-cert-copy : neutron | Copying 
over extra CA certificates -------- 3.80s 2026-03-29 01:07:57.773320 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.62s 2026-03-29 01:07:57.773323 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 3.56s 2026-03-29 01:07:57.773326 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.52s 2026-03-29 01:07:57.773329 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.36s 2026-03-29 01:07:57.773332 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.35s 2026-03-29 01:07:57.773336 | orchestrator | 2026-03-29 01:07:57 | INFO  | Task 12879e08-866f-411e-86e5-19578cc2ff39 is in state STARTED 2026-03-29 01:07:57.773339 | orchestrator | 2026-03-29 01:07:57 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:07:57.773343 | orchestrator | 2026-03-29 01:07:57 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:08:00.832127 | orchestrator | 2026-03-29 01:08:00 | INFO  | Task e777afcd-bf7b-43de-ad1d-8dba3c410dba is in state STARTED 2026-03-29 01:08:00.833052 | orchestrator | 2026-03-29 01:08:00 | INFO  | Task e2472d95-c3e5-4d46-8a8d-91f5d997e477 is in state STARTED 2026-03-29 01:08:00.836214 | orchestrator | 2026-03-29 01:08:00 | INFO  | Task 12879e08-866f-411e-86e5-19578cc2ff39 is in state STARTED 2026-03-29 01:08:00.837021 | orchestrator | 2026-03-29 01:08:00 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:08:00.837056 | orchestrator | 2026-03-29 01:08:00 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:08:03.872820 | orchestrator | 2026-03-29 01:08:03 | INFO  | Task e777afcd-bf7b-43de-ad1d-8dba3c410dba is in state STARTED 2026-03-29 01:08:03.872896 | orchestrator | 2026-03-29 01:08:03 | INFO  | Task e2472d95-c3e5-4d46-8a8d-91f5d997e477 is in state SUCCESS 2026-03-29 
01:08:03.873208 | orchestrator | 2026-03-29 01:08:03 | INFO  | Task 12879e08-866f-411e-86e5-19578cc2ff39 is in state STARTED 2026-03-29 01:08:03.874048 | orchestrator | 2026-03-29 01:08:03 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:08:03.874081 | orchestrator | 2026-03-29 01:08:03 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:08:06.918213 | orchestrator | 2026-03-29 01:08:06 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:08:06.920371 | orchestrator | 2026-03-29 01:08:06 | INFO  | Task e777afcd-bf7b-43de-ad1d-8dba3c410dba is in state STARTED 2026-03-29 01:08:06.922446 | orchestrator | 2026-03-29 01:08:06 | INFO  | Task 12879e08-866f-411e-86e5-19578cc2ff39 is in state STARTED 2026-03-29 01:08:06.924223 | orchestrator | 2026-03-29 01:08:06 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:08:06.924412 | orchestrator | 2026-03-29 01:08:06 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:08:09.957354 | orchestrator | 2026-03-29 01:08:09 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:08:09.958962 | orchestrator | 2026-03-29 01:08:09 | INFO  | Task e777afcd-bf7b-43de-ad1d-8dba3c410dba is in state STARTED 2026-03-29 01:08:09.961976 | orchestrator | 2026-03-29 01:08:09 | INFO  | Task 12879e08-866f-411e-86e5-19578cc2ff39 is in state STARTED 2026-03-29 01:08:09.963021 | orchestrator | 2026-03-29 01:08:09 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:08:09.963045 | orchestrator | 2026-03-29 01:08:09 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:08:12.997830 | orchestrator | 2026-03-29 01:08:12 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:08:12.998356 | orchestrator | 2026-03-29 01:08:13 | INFO  | Task e777afcd-bf7b-43de-ad1d-8dba3c410dba is in state STARTED 2026-03-29 01:08:12.999252 | orchestrator 
| 2026-03-29 01:08:13 | INFO  | Task 12879e08-866f-411e-86e5-19578cc2ff39 is in state STARTED
2026-03-29 01:08:13.001020 | orchestrator | 2026-03-29 01:08:13 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED
2026-03-29 01:08:13.001053 | orchestrator | 2026-03-29 01:08:13 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 s from 01:08:16 through 01:09:04: tasks eaf45d79-acac-464b-9378-52c460a1ae78, e777afcd-bf7b-43de-ad1d-8dba3c410dba, 12879e08-866f-411e-86e5-19578cc2ff39 and 05bed01a-cd1b-437f-92a2-658822fb3875 all remained in state STARTED ...]
2026-03-29 01:09:07.790938 | orchestrator | 2026-03-29 01:09:07 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED
2026-03-29 01:09:07.793004 | orchestrator | 2026-03-29 01:09:07 | INFO  | Task e777afcd-bf7b-43de-ad1d-8dba3c410dba is in state SUCCESS
2026-03-29 01:09:07.794741 | orchestrator |
2026-03-29 01:09:07.794777 | orchestrator | 2026-03-29 
01:09:07.794782 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 01:09:07.794786 | orchestrator |
2026-03-29 01:09:07.794789 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 01:09:07.794793 | orchestrator | Sunday 29 March 2026 01:08:01 +0000 (0:00:00.214) 0:00:00.214 **********
2026-03-29 01:09:07.794796 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:09:07.794800 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:09:07.794804 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:09:07.794807 | orchestrator |
2026-03-29 01:09:07.794810 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 01:09:07.794814 | orchestrator | Sunday 29 March 2026 01:08:01 +0000 (0:00:00.480) 0:00:00.694 **********
2026-03-29 01:09:07.794817 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-03-29 01:09:07.794821 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-29 01:09:07.794824 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-29 01:09:07.794827 | orchestrator |
2026-03-29 01:09:07.794837 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-29 01:09:07.794840 | orchestrator |
2026-03-29 01:09:07.794843 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-29 01:09:07.794847 | orchestrator | Sunday 29 March 2026 01:08:02 +0000 (0:00:00.568) 0:00:01.262 **********
2026-03-29 01:09:07.794850 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:09:07.794853 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:09:07.794856 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:09:07.794859 | orchestrator |
2026-03-29 01:09:07.794863 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 01:09:07.794866 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 01:09:07.794871 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 01:09:07.794874 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 01:09:07.794877 | orchestrator |
2026-03-29 01:09:07.794880 | orchestrator |
2026-03-29 01:09:07.794883 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 01:09:07.794897 | orchestrator | Sunday 29 March 2026 01:08:03 +0000 (0:00:01.059) 0:00:02.322 **********
2026-03-29 01:09:07.794901 | orchestrator | ===============================================================================
2026-03-29 01:09:07.794904 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 1.06s
2026-03-29 01:09:07.794907 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s
2026-03-29 01:09:07.794910 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.48s
2026-03-29 01:09:07.794913 | orchestrator |
2026-03-29 01:09:07.794916 | orchestrator |
2026-03-29 01:09:07.794919 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 01:09:07.794922 | orchestrator |
2026-03-29 01:09:07.794925 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 01:09:07.794929 | orchestrator | Sunday 29 March 2026 01:07:21 +0000 (0:00:00.330) 0:00:00.330 **********
2026-03-29 01:09:07.794932 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:09:07.794935 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:09:07.794938 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:09:07.794941 | orchestrator |
2026-03-29 01:09:07.794944 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 01:09:07.794947 | orchestrator | Sunday 29 March 2026 01:07:21 +0000 (0:00:00.345) 0:00:00.675 **********
2026-03-29 01:09:07.794950 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-03-29 01:09:07.794953 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-03-29 01:09:07.794957 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-03-29 01:09:07.794960 | orchestrator |
2026-03-29 01:09:07.794963 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-03-29 01:09:07.794966 | orchestrator |
2026-03-29 01:09:07.794969 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-29 01:09:07.794972 | orchestrator | Sunday 29 March 2026 01:07:22 +0000 (0:00:00.416) 0:00:01.092 **********
2026-03-29 01:09:07.794975 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:09:07.794978 | orchestrator |
2026-03-29 01:09:07.794982 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-03-29 01:09:07.794985 | orchestrator | Sunday 29 March 2026 01:07:23 +0000 (0:00:01.424) 0:00:02.516 **********
2026-03-29 01:09:07.794988 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-03-29 01:09:07.794991 | orchestrator |
2026-03-29 01:09:07.794994 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-03-29 01:09:07.794998 | orchestrator | Sunday 29 March 2026 01:07:27 +0000 (0:00:04.065) 0:00:06.581 **********
2026-03-29 01:09:07.795001 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-03-29 01:09:07.795004 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-03-29 01:09:07.795007 | orchestrator |
2026-03-29 01:09:07.795010 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-03-29 01:09:07.795014 | orchestrator | Sunday 29 March 2026 01:07:33 +0000 (0:00:05.937) 0:00:12.519 **********
2026-03-29 01:09:07.795017 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-29 01:09:07.795020 | orchestrator |
2026-03-29 01:09:07.795023 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-03-29 01:09:07.795026 | orchestrator | Sunday 29 March 2026 01:07:36 +0000 (0:00:03.221) 0:00:15.741 **********
2026-03-29 01:09:07.795037 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-03-29 01:09:07.795040 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-29 01:09:07.795043 | orchestrator |
2026-03-29 01:09:07.795046 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-03-29 01:09:07.795049 | orchestrator | Sunday 29 March 2026 01:07:40 +0000 (0:00:03.625) 0:00:19.367 **********
2026-03-29 01:09:07.795055 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-29 01:09:07.795058 | orchestrator |
2026-03-29 01:09:07.795062 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-03-29 01:09:07.795065 | orchestrator | Sunday 29 March 2026 01:07:43 +0000 (0:00:03.177) 0:00:22.544 **********
2026-03-29 01:09:07.795068 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-03-29 01:09:07.795071 | orchestrator |
2026-03-29 01:09:07.795074 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-03-29 01:09:07.795077 | orchestrator | Sunday 29 March 2026 01:07:47 +0000 (0:00:03.712) 0:00:26.257 **********
2026-03-29 01:09:07.795080 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:09:07.795083 | orchestrator |
2026-03-29 01:09:07.795088 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-03-29 01:09:07.795091 | orchestrator | Sunday 29 March 2026 01:07:50 +0000 (0:00:03.016) 0:00:29.274 **********
2026-03-29 01:09:07.795094 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:09:07.795097 | orchestrator |
2026-03-29 01:09:07.795100 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-03-29 01:09:07.795103 | orchestrator | Sunday 29 March 2026 01:07:53 +0000 (0:00:03.481) 0:00:32.755 **********
2026-03-29 01:09:07.795106 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:09:07.795109 | orchestrator |
2026-03-29 01:09:07.795113 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-03-29 01:09:07.795116 | orchestrator | Sunday 29 March 2026 01:07:57 +0000 (0:00:03.684) 0:00:36.440 **********
2026-03-29 01:09:07.795121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:09:07.795126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:09:07.795130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:09:07.795138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:09:07.795146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:09:07.795150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:09:07.795153 | orchestrator |
2026-03-29 01:09:07.795156 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-03-29 01:09:07.795159 | orchestrator | Sunday 29 March 2026 01:07:59 +0000 (0:00:01.916) 0:00:38.356 **********
2026-03-29 01:09:07.795162 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:09:07.795166 | orchestrator |
2026-03-29 01:09:07.795169 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-03-29 01:09:07.795172 | orchestrator | Sunday 29 March 2026 01:07:59 +0000 (0:00:00.122) 0:00:38.479 **********
2026-03-29 01:09:07.795177 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:09:07.795183 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:09:07.795192 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:09:07.795197 | orchestrator |
2026-03-29 01:09:07.795202 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-03-29 01:09:07.795207 | orchestrator | Sunday 29 March 2026 01:08:00 +0000 (0:00:00.322) 0:00:38.802 **********
2026-03-29 01:09:07.795212 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-29 01:09:07.795217 | orchestrator |
2026-03-29 01:09:07.795222 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-03-29 01:09:07.795227 | orchestrator | Sunday 29 March 2026 01:08:01 +0000 (0:00:01.137) 0:00:39.939 **********
2026-03-29 01:09:07.795232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:09:07.795287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:09:07.795299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:09:07.795304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:09:07.795307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:09:07.795313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:09:07.795316 | orchestrator |
2026-03-29 01:09:07.795319 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-03-29 01:09:07.795322 | orchestrator | Sunday 29 March 2026 01:08:03 +0000 (0:00:02.463) 0:00:42.403 **********
2026-03-29 01:09:07.795326 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:09:07.795329 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:09:07.795332 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:09:07.795335 | orchestrator |
2026-03-29 01:09:07.795338 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-29 01:09:07.795343 | orchestrator | Sunday 29 March 2026 01:08:04 +0000 (0:00:00.512) 0:00:42.915 **********
2026-03-29 01:09:07.795347 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:09:07.795350 | orchestrator |
2026-03-29 01:09:07.795353 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-03-29 01:09:07.795356 | orchestrator | Sunday 29 March 2026 01:08:04 +0000 (0:00:00.543) 0:00:43.459 **********
2026-03-29 01:09:07.795361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:09:07.795365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:09:07.795368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:09:07.795373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:09:07.795380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:09:07.795385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:09:07.795388 | orchestrator |
2026-03-29 01:09:07.795391 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2026-03-29 01:09:07.795394 | orchestrator | Sunday 29 March 2026 01:08:07 +0000 (0:00:02.356) 0:00:45.815 **********
2026-03-29 01:09:07.795398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:09:07.795403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:09:07.795407 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:09:07.795410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 01:09:07.795416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:09:07.795419 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:09:07.795428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 01:09:07.795431 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:09:07.795437 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:09:07.795441 | orchestrator | 2026-03-29 01:09:07.795444 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-29 01:09:07.795447 | orchestrator | Sunday 29 March 2026 01:08:08 +0000 (0:00:01.414) 0:00:47.230 ********** 2026-03-29 01:09:07.795450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511'}}}})  2026-03-29 01:09:07.795454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:09:07.795457 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:09:07.795464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 01:09:07.795468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:09:07.795471 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:09:07.795474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 01:09:07.795479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:09:07.795483 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:09:07.795486 | orchestrator | 2026-03-29 01:09:07.795489 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-29 01:09:07.795492 | orchestrator | Sunday 29 March 2026 01:08:09 +0000 (0:00:00.830) 0:00:48.060 ********** 2026-03-29 01:09:07.795588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 01:09:07.795595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 01:09:07.795598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 01:09:07.795647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:07.795652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:07.795658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:07.795661 | orchestrator | 2026-03-29 01:09:07.795665 | orchestrator | TASK 
[magnum : Copying over magnum.conf] *************************************** 2026-03-29 01:09:07.795668 | orchestrator | Sunday 29 March 2026 01:08:11 +0000 (0:00:02.133) 0:00:50.194 ********** 2026-03-29 01:09:07.795673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 01:09:07.795679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 01:09:07.795682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 01:09:07.795685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:07.795691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:07.795696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:07.795701 | orchestrator | 2026-03-29 01:09:07.795705 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-29 01:09:07.795708 | orchestrator | Sunday 29 March 2026 01:08:18 +0000 (0:00:07.085) 0:00:57.279 ********** 2026-03-29 01:09:07.795711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 01:09:07.795714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:09:07.795718 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:09:07.795721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 01:09:07.795727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:09:07.795730 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:09:07.795735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 01:09:07.795741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:09:07.795744 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:09:07.795747 | orchestrator | 2026-03-29 01:09:07.795751 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-29 01:09:07.795754 | orchestrator | Sunday 29 March 2026 01:08:19 +0000 (0:00:00.591) 0:00:57.870 ********** 2026-03-29 01:09:07.795757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 01:09:07.795762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 01:09:07.795767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 01:09:07.795772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:07.795776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:07.795779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:07.795782 | orchestrator | 2026-03-29 01:09:07.795785 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-29 01:09:07.795788 | orchestrator | Sunday 29 March 2026 01:08:21 +0000 (0:00:02.172) 0:01:00.043 ********** 2026-03-29 01:09:07.795792 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:09:07.795795 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:09:07.795798 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:09:07.795801 | orchestrator | 2026-03-29 01:09:07.795804 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-29 01:09:07.795807 | orchestrator | Sunday 29 March 2026 01:08:21 +0000 (0:00:00.263) 0:01:00.307 ********** 2026-03-29 01:09:07.795810 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:09:07.795813 | orchestrator | 2026-03-29 01:09:07.795817 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-29 01:09:07.795820 | orchestrator | Sunday 29 March 2026 01:08:23 +0000 (0:00:02.106) 0:01:02.413 ********** 2026-03-29 01:09:07.795823 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:09:07.795826 | orchestrator | 2026-03-29 01:09:07.795829 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-29 01:09:07.795832 | orchestrator | Sunday 29 March 2026 01:08:25 +0000 (0:00:02.135) 0:01:04.548 ********** 2026-03-29 01:09:07.795840 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:09:07.795843 | orchestrator | 2026-03-29 
01:09:07.795846 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-29 01:09:07.795849 | orchestrator | Sunday 29 March 2026 01:08:40 +0000 (0:00:14.682) 0:01:19.230 ********** 2026-03-29 01:09:07.795852 | orchestrator | 2026-03-29 01:09:07.795855 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-29 01:09:07.795858 | orchestrator | Sunday 29 March 2026 01:08:40 +0000 (0:00:00.237) 0:01:19.468 ********** 2026-03-29 01:09:07.795862 | orchestrator | 2026-03-29 01:09:07.795865 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-29 01:09:07.795868 | orchestrator | Sunday 29 March 2026 01:08:40 +0000 (0:00:00.065) 0:01:19.534 ********** 2026-03-29 01:09:07.795871 | orchestrator | 2026-03-29 01:09:07.795874 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-29 01:09:07.795877 | orchestrator | Sunday 29 March 2026 01:08:40 +0000 (0:00:00.066) 0:01:19.600 ********** 2026-03-29 01:09:07.795880 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:09:07.795883 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:09:07.795889 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:09:07.795892 | orchestrator | 2026-03-29 01:09:07.795896 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-29 01:09:07.795899 | orchestrator | Sunday 29 March 2026 01:08:53 +0000 (0:00:12.475) 0:01:32.076 ********** 2026-03-29 01:09:07.795902 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:09:07.795905 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:09:07.795908 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:09:07.795911 | orchestrator | 2026-03-29 01:09:07.795914 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:09:07.795918 | 
orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 01:09:07.795921 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-29 01:09:07.795924 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-29 01:09:07.795927 | orchestrator | 2026-03-29 01:09:07.795930 | orchestrator | 2026-03-29 01:09:07.795934 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:09:07.795937 | orchestrator | Sunday 29 March 2026 01:09:06 +0000 (0:00:13.471) 0:01:45.547 ********** 2026-03-29 01:09:07.795940 | orchestrator | =============================================================================== 2026-03-29 01:09:07.795943 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 14.68s 2026-03-29 01:09:07.795946 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 13.47s 2026-03-29 01:09:07.795949 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 12.48s 2026-03-29 01:09:07.795953 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 7.09s 2026-03-29 01:09:07.795956 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 5.94s 2026-03-29 01:09:07.795959 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.07s 2026-03-29 01:09:07.795962 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.71s 2026-03-29 01:09:07.795965 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.68s 2026-03-29 01:09:07.795968 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.63s 2026-03-29 01:09:07.795971 | orchestrator | magnum : 
Creating Magnum trustee user ----------------------------------- 3.48s 2026-03-29 01:09:07.795974 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.22s 2026-03-29 01:09:07.795978 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.18s 2026-03-29 01:09:07.795983 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.02s 2026-03-29 01:09:07.795986 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.46s 2026-03-29 01:09:07.795989 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.36s 2026-03-29 01:09:07.795992 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.17s 2026-03-29 01:09:07.795995 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.14s 2026-03-29 01:09:07.795998 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.13s 2026-03-29 01:09:07.796002 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.11s 2026-03-29 01:09:07.796005 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.92s 2026-03-29 01:09:07.796008 | orchestrator | 2026-03-29 01:09:07 | INFO  | Task 12879e08-866f-411e-86e5-19578cc2ff39 is in state STARTED 2026-03-29 01:09:07.796391 | orchestrator | 2026-03-29 01:09:07 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:09:07.796422 | orchestrator | 2026-03-29 01:09:07 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:09:10.840254 | orchestrator | 2026-03-29 01:09:10 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:09:10.843370 | orchestrator | 2026-03-29 01:09:10 | INFO  | Task 12879e08-866f-411e-86e5-19578cc2ff39 is in state STARTED 2026-03-29 01:09:10.847949 | 
orchestrator | 2026-03-29 01:09:10 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:09:10.847990 | orchestrator | 2026-03-29 01:09:10 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:09:47.399073 | orchestrator | 
2026-03-29 01:09:47 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:09:47.400724 | orchestrator | 2026-03-29 01:09:47 | INFO  | Task 12879e08-866f-411e-86e5-19578cc2ff39 is in state STARTED 2026-03-29 01:09:47.402344 | orchestrator | 2026-03-29 01:09:47 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:09:47.402439 | orchestrator | 2026-03-29 01:09:47 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:09:50.457154 | orchestrator | 2026-03-29 01:09:50 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:09:50.461199 | orchestrator | 2026-03-29 01:09:50 | INFO  | Task 12879e08-866f-411e-86e5-19578cc2ff39 is in state SUCCESS 2026-03-29 01:09:50.463248 | orchestrator | 2026-03-29 01:09:50.463299 | orchestrator | 2026-03-29 01:09:50.463307 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:09:50.463314 | orchestrator | 2026-03-29 01:09:50.463320 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:09:50.463326 | orchestrator | Sunday 29 March 2026 01:07:51 +0000 (0:00:00.286) 0:00:00.286 ********** 2026-03-29 01:09:50.463332 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:09:50.463339 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:09:50.463345 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:09:50.463351 | orchestrator | 2026-03-29 01:09:50.463357 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:09:50.463363 | orchestrator | Sunday 29 March 2026 01:07:51 +0000 (0:00:00.266) 0:00:00.553 ********** 2026-03-29 01:09:50.463369 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-29 01:09:50.463375 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-29 01:09:50.463381 | orchestrator | ok: [testbed-node-2] => 
(item=enable_grafana_True) 2026-03-29 01:09:50.463387 | orchestrator | 2026-03-29 01:09:50.463393 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-03-29 01:09:50.463398 | orchestrator | 2026-03-29 01:09:50.463404 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-29 01:09:50.463410 | orchestrator | Sunday 29 March 2026 01:07:51 +0000 (0:00:00.303) 0:00:00.857 ********** 2026-03-29 01:09:50.463416 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:09:50.463422 | orchestrator | 2026-03-29 01:09:50.463428 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-29 01:09:50.463434 | orchestrator | Sunday 29 March 2026 01:07:52 +0000 (0:00:00.548) 0:00:01.406 ********** 2026-03-29 01:09:50.463443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:09:50.463456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:09:50.463495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:09:50.463506 | orchestrator | 2026-03-29 01:09:50.463515 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-29 01:09:50.463557 | orchestrator | Sunday 29 March 2026 01:07:53 +0000 (0:00:01.149) 0:00:02.555 ********** 2026-03-29 01:09:50.463570 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-29 01:09:50.463580 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-29 01:09:50.463590 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 01:09:50.463600 | orchestrator | 2026-03-29 01:09:50.463607 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-29 01:09:50.463615 | orchestrator | Sunday 29 March 2026 01:07:54 +0000 (0:00:00.799) 0:00:03.354 ********** 2026-03-29 01:09:50.463675 | orchestrator | included: 
/ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:09:50.463685 | orchestrator | 2026-03-29 01:09:50.463694 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-29 01:09:50.463703 | orchestrator | Sunday 29 March 2026 01:07:54 +0000 (0:00:00.626) 0:00:03.981 ********** 2026-03-29 01:09:50.463727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:09:50.463739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:09:50.463751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:09:50.463761 | orchestrator | 2026-03-29 01:09:50.463838 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-29 01:09:50.463863 | orchestrator | Sunday 29 March 2026 01:07:56 +0000 (0:00:01.497) 0:00:05.479 ********** 2026-03-29 01:09:50.463873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 01:09:50.463937 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:09:50.463957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 01:09:50.463969 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:09:50.463989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 01:09:50.464000 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:09:50.464010 | orchestrator | 2026-03-29 01:09:50.464017 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-29 01:09:50.464024 | orchestrator | Sunday 29 March 2026 01:07:56 +0000 (0:00:00.363) 0:00:05.843 ********** 2026-03-29 01:09:50.464031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 01:09:50.464038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 01:09:50.464052 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:09:50.464062 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:09:50.464075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 01:09:50.464089 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:09:50.464098 | orchestrator | 2026-03-29 01:09:50.464107 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-29 01:09:50.464117 | orchestrator | Sunday 29 March 2026 01:07:57 +0000 
(0:00:00.578) 0:00:06.421 ********** 2026-03-29 01:09:50.464133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:09:50.464144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:09:50.464163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:09:50.464173 | orchestrator | 2026-03-29 01:09:50.464180 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-29 01:09:50.464187 | orchestrator | Sunday 29 March 2026 01:07:58 +0000 (0:00:01.404) 0:00:07.826 ********** 2026-03-29 01:09:50.464194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:09:50.464209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:09:50.464219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-29 01:09:50.464236 | orchestrator |
2026-03-29 01:09:50.464246 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-03-29 01:09:50.464255 | orchestrator | Sunday 29 March 2026 01:07:59 +0000 (0:00:01.335) 0:00:09.162 **********
2026-03-29 01:09:50.464264 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:09:50.464274 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:09:50.464283 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:09:50.464291 | orchestrator |
2026-03-29 01:09:50.464300 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-03-29 01:09:50.464315 | orchestrator | Sunday 29 March 2026 01:08:00 +0000 (0:00:00.429) 0:00:09.591 **********
2026-03-29 01:09:50.464325 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-29 01:09:50.464334 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-29 01:09:50.464343 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-29 01:09:50.464353 | orchestrator |
2026-03-29 01:09:50.464362 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-03-29 01:09:50.464372 | orchestrator | Sunday 29 March 2026 01:08:01 +0000 (0:00:01.406) 0:00:10.998 **********
2026-03-29 01:09:50.464381 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-29 01:09:50.464392 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-29 01:09:50.464406 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-29 01:09:50.464417 | orchestrator |
2026-03-29 01:09:50.464427 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-03-29 01:09:50.464437 | orchestrator | Sunday 29 March 2026 01:08:03 +0000 (0:00:01.330) 0:00:12.328 **********
2026-03-29 01:09:50.464454 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-29 01:09:50.464464 | orchestrator |
2026-03-29 01:09:50.464470 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-03-29 01:09:50.464476 | orchestrator | Sunday 29 March 2026 01:08:04 +0000 (0:00:00.988) 0:00:13.317 **********
2026-03-29 01:09:50.464482 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-03-29 01:09:50.464488 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-03-29 01:09:50.464502 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:09:50.464509 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:09:50.464515 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:09:50.464521 | orchestrator |
2026-03-29 01:09:50.464609 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-03-29 01:09:50.464617 | orchestrator | Sunday 29 March 2026 01:08:04 +0000 (0:00:00.695) 0:00:14.012 **********
2026-03-29 01:09:50.464623 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:09:50.464629 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:09:50.464634 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:09:50.464640 | orchestrator |
2026-03-29 01:09:50.464649 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-03-29 01:09:50.464658 | orchestrator | Sunday 29 March 2026 01:08:05 +0000 (0:00:00.422) 0:00:14.434 **********
2026-03-29 01:09:50.464668 | orchestrator | changed: [testbed-node-0] => (item=ceph/ceph-cluster-advanced.json)
2026-03-29 01:09:50.464741 | orchestrator | changed: [testbed-node-1] => (item=ceph/ceph-cluster-advanced.json)
2026-03-29 01:09:50.464753 | orchestrator | changed: [testbed-node-2] => (item=ceph/ceph-cluster-advanced.json)
2026-03-29 01:09:50.464770 | orchestrator | changed: [testbed-node-0] => (item=ceph/cephfsdashboard.json)
2026-03-29 01:09:50.464824 | orchestrator | changed: [testbed-node-1] => (item=ceph/cephfsdashboard.json)
2026-03-29 01:09:50.464845 | orchestrator | changed: [testbed-node-2] => (item=ceph/cephfsdashboard.json)
2026-03-29 01:09:50.464856 | orchestrator | changed: [testbed-node-2] => (item=ceph/rbd-overview.json)
2026-03-29 01:09:50.464866 | orchestrator | changed: [testbed-node-1] => (item=ceph/rbd-overview.json)
2026-03-29 01:09:50.464876 | orchestrator | changed: [testbed-node-0] => (item=ceph/rbd-overview.json)
2026-03-29 01:09:50.464893 | orchestrator | changed: [testbed-node-1] => (item=ceph/ceph_pools.json)
2026-03-29 01:09:50.464905 | orchestrator | changed: [testbed-node-2] => (item=ceph/ceph_pools.json)
2026-03-29 01:09:50.464929 | orchestrator | changed: [testbed-node-0] => (item=ceph/ceph_pools.json)
2026-03-29 01:09:50.464942 | orchestrator | changed: [testbed-node-1] => (item=ceph/rgw-s3-analytics.json)
2026-03-29 01:09:50.464953 | orchestrator | changed: [testbed-node-2] => (item=ceph/rgw-s3-analytics.json)
2026-03-29 01:09:50.464964 | orchestrator | changed: [testbed-node-0] => (item=ceph/rgw-s3-analytics.json)
2026-03-29 01:09:50.464975 | orchestrator | changed: [testbed-node-1] => (item=ceph/ceph-nvmeof-performance.json)
2026-03-29 01:09:50.464990 | orchestrator | changed: [testbed-node-2] => (item=ceph/ceph-nvmeof-performance.json)
2026-03-29 01:09:50.465017 | orchestrator | changed: [testbed-node-0] => (item=ceph/ceph-nvmeof-performance.json)
2026-03-29 01:09:50.465028 | orchestrator | changed: [testbed-node-1] => (item=ceph/osd-device-details.json)
2026-03-29 01:09:50.465038 | orchestrator | changed: [testbed-node-2] => (item=ceph/osd-device-details.json)
2026-03-29 01:09:50.465047 | orchestrator | changed: [testbed-node-0] => (item=ceph/osd-device-details.json)
2026-03-29 01:09:50.465057 | orchestrator | changed: [testbed-node-1] => (item=ceph/radosgw-overview.json)
2026-03-29 01:09:50.465072 | orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-overview.json)
2026-03-29 01:09:50.465447 | orchestrator | changed: [testbed-node-0] => (item=ceph/radosgw-overview.json)
2026-03-29 01:09:50.465540 | orchestrator | changed: [testbed-node-1] => (item=ceph/README.md)
2026-03-29 01:09:50.465555 | orchestrator | changed: [testbed-node-2] => (item=ceph/README.md)
2026-03-29 01:09:50.465564 | orchestrator | changed: [testbed-node-0] => (item=ceph/README.md)
2026-03-29 01:09:50.465570 | orchestrator | changed: [testbed-node-1] => (item=ceph/ceph-cluster.json)
2026-03-29 01:09:50.465581 | orchestrator | changed: [testbed-node-2] => (item=ceph/ceph-cluster.json)
2026-03-29 01:09:50.465587 | orchestrator | changed: [testbed-node-0] => (item=ceph/ceph-cluster.json)
2026-03-29 01:09:50.465606 | orchestrator | changed: [testbed-node-1] => (item=ceph/cephfs-overview.json)
2026-03-29 01:09:50.465611 | orchestrator | changed: [testbed-node-2] => (item=ceph/cephfs-overview.json)
2026-03-29 01:09:50.465617 | orchestrator | changed: [testbed-node-0] => (item=ceph/cephfs-overview.json)
2026-03-29 01:09:50.465622 | orchestrator | changed: [testbed-node-1] => (item=ceph/pool-detail.json)
2026-03-29 01:09:50.465627 | orchestrator | changed: [testbed-node-2] => (item=ceph/pool-detail.json)
2026-03-29 01:09:50.465635 | orchestrator | changed: [testbed-node-0] => (item=ceph/pool-detail.json)
2026-03-29 01:09:50.465703 | orchestrator | changed: [testbed-node-1] => (item=ceph/rbd-details.json)
2026-03-29 01:09:50.465713 | orchestrator | changed: [testbed-node-2] => (item=ceph/rbd-details.json)
2026-03-29 01:09:50.465721 | orchestrator | changed: [testbed-node-0] => (item=ceph/rbd-details.json)
2026-03-29 01:09:50.465729 | orchestrator | changed: [testbed-node-1] => (item=ceph/ceph_overview.json)
2026-03-29 01:09:50.465739 | orchestrator | changed: [testbed-node-2] => (item=ceph/ceph_overview.json)
2026-03-29 01:09:50.465752 | orchestrator | changed: [testbed-node-0] => (item=ceph/ceph_overview.json)
2026-03-29 01:09:50.465766 | orchestrator | changed: [testbed-node-1] => (item=ceph/radosgw-detail.json)
2026-03-29 01:09:50.465776 | orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-detail.json)
2026-03-29 01:09:50.465781 | orchestrator | changed: [testbed-node-0] => (item=ceph/radosgw-detail.json)
2026-03-29 01:09:50.465787 | orchestrator | changed: [testbed-node-1] => (item=ceph/smb-overview.json)
2026-03-29 01:09:50.465792 | orchestrator | changed: [testbed-node-2] => (item=ceph/smb-overview.json)
2026-03-29 01:09:50.465797 | orchestrator | changed: [testbed-node-0] => (item=ceph/smb-overview.json)
2026-03-29 01:09:50.465832 | orchestrator | changed: [testbed-node-2] => (item=ceph/osds-overview.json)
2026-03-29 01:09:50.465852 | orchestrator | changed: [testbed-node-1] => (item=ceph/osds-overview.json)
2026-03-29 01:09:50.465863 | orchestrator | changed: [testbed-node-0] => (item=ceph/osds-overview.json)
2026-03-29 01:09:50.465872 | orchestrator | changed: [testbed-node-2] => (item=ceph/multi-cluster-overview.json)
2026-03-29 01:09:50.465882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1102379, 'dev': 96, 'nlink': 1, 'atime':
1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6796997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.465891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1102379, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6796997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.465911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1102377, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6786993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.465924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 
1102377, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6786993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.465930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1102377, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6786993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.465935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1102393, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6867015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.465940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 49016, 'inode': 1102393, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6867015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.465946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1102393, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6867015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.465959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1102373, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.678323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.465968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1102373, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.678323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.465982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1102373, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.678323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.465991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1102396, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6897023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1102396, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6897023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1102396, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6897023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1102310, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6566935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1102310, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6566935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1102310, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6566935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1102498, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.748718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1102498, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.748718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1102498, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.748718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1102408, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.703706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1102408, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.703706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1102408, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.703706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1102405, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6957037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466191 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1102405, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6957037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1102405, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6957037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1102412, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7065346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466232 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1102412, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7065346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1102412, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7065346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1102402, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6936684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1102402, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6936684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1102402, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6936684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1102481, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7422795, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1102481, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7422795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1102481, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7422795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1102445, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 
'mtime': 1774742554.0, 'ctime': 1774743590.7287126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1102445, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7287126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1102445, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7287126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:09:50.466339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1102482, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7437875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1102482, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7437875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1102482, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7437875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1102493, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7477176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1102493, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7477176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1102493, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7477176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1102480, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7417161, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1102480, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7417161, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1102480, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7417161, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1102410, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7057066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1102410, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7057066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1102410, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7057066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1102407, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7007053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1102407, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7007053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1102409, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7047062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1102407, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7007053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1102409, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7047062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1102406, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6987047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1102409, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7047062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1102406, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6987047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1102411, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1102406, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6987047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1102411, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1102488, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7467175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1102411, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1102488, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7467175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1102487, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7447202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1102488, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7467175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1102487, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7447202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1102403, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6937034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1102487, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7447202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1102403, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6937034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1102404, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6947036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1102403, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6937034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1102404, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6947036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1102474, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7417161, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1102404, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.6947036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1102474, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7417161, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1102486, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7437875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1102474, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7417161, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1102486, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7437875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1102486, 'dev': 96, 'nlink': 1, 'atime': 1774742554.0, 'mtime': 1774742554.0, 'ctime': 1774743590.7437875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:09:50.466742 | orchestrator |
2026-03-29 01:09:50.466748 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-03-29 01:09:50.466753 | orchestrator | Sunday 29 March 2026 01:08:42 +0000 (0:00:36.814) 0:00:51.249 **********
2026-03-29 01:09:50.466759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-29 01:09:50.466764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-29 01:09:50.466770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-29 01:09:50.466775 | orchestrator |
2026-03-29 01:09:50.466781 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-03-29 01:09:50.466786 | orchestrator | Sunday 29 March 2026 01:08:43 +0000 (0:00:01.557) 0:00:52.806 **********
2026-03-29 01:09:50.466795 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:09:50.466804 | orchestrator |
2026-03-29 01:09:50.466816 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-03-29 01:09:50.466821 | orchestrator | Sunday 29 March 2026 01:08:46 +0000 (0:00:02.965) 0:00:55.774 **********
2026-03-29 01:09:50.466827 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:09:50.466832 | orchestrator |
2026-03-29 01:09:50.466837 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-29 01:09:50.466842 | orchestrator | Sunday 29 March 2026 01:08:49 +0000 (0:00:02.545) 0:00:58.319 **********
2026-03-29 01:09:50.466847 | orchestrator |
2026-03-29 01:09:50.466852 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-29 01:09:50.466857 | orchestrator | Sunday 29 March 2026 01:08:49 +0000 (0:00:00.056) 0:00:58.375 **********
2026-03-29 01:09:50.466862 | orchestrator |
2026-03-29 01:09:50.466868 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-29 01:09:50.466873 | orchestrator | Sunday 29 March 2026 01:08:49 +0000 (0:00:00.058) 0:00:58.434 **********
2026-03-29 01:09:50.466901 | orchestrator |
2026-03-29 01:09:50.466907 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-03-29 01:09:50.466912 | orchestrator | Sunday 29 March 2026 01:08:49 +0000 (0:00:00.062) 0:00:58.496 **********
2026-03-29 01:09:50.466917 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:09:50.466922 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:09:50.466927 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:09:50.466932 | orchestrator |
2026-03-29 01:09:50.466937 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-03-29 01:09:50.466946 | orchestrator | Sunday 29 March 2026 01:08:51 +0000 (0:00:02.053) 0:01:00.550 **********
2026-03-29 01:09:50.466952 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:09:50.466957 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:09:50.466962 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-03-29 01:09:50.466968 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-03-29 01:09:50.466973 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:09:50.466979 | orchestrator |
2026-03-29 01:09:50.466984 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-03-29 01:09:50.466989 | orchestrator | Sunday 29 March 2026 01:09:17 +0000 (0:00:26.001) 0:01:26.552 **********
2026-03-29 01:09:50.466994 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:09:50.466999 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:09:50.467005 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:09:50.467010 | orchestrator |
2026-03-29 01:09:50.467015 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-03-29 01:09:50.467021 | orchestrator | Sunday 29 March 2026 01:09:41 +0000 (0:00:24.167) 0:01:50.719 **********
2026-03-29 01:09:50.467026 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:09:50.467031 | orchestrator |
2026-03-29 01:09:50.467036 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-03-29 01:09:50.467041 | orchestrator | Sunday 29 March 2026 01:09:44 +0000 (0:00:03.058) 0:01:53.778 **********
2026-03-29 01:09:50.467046 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:09:50.467051 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:09:50.467060 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:09:50.467068 | orchestrator |
2026-03-29 01:09:50.467076 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-03-29 01:09:50.467084 | orchestrator | Sunday 29 March 2026 01:09:44 +0000 (0:00:00.253) 0:01:54.032 **********
2026-03-29 01:09:50.467093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-03-29 01:09:50.467102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-03-29 01:09:50.467112 | orchestrator |
2026-03-29 01:09:50.467120 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-03-29 01:09:50.467128 | orchestrator | Sunday 29 March 2026 01:09:47 +0000 (0:00:02.366) 0:01:56.398 **********
2026-03-29 01:09:50.467136 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:09:50.467144 | orchestrator |
2026-03-29 01:09:50.467152 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 01:09:50.467162 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 01:09:50.467172 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 01:09:50.467186 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 01:09:50.467195 | orchestrator |
2026-03-29 01:09:50.467204 | orchestrator |
2026-03-29 01:09:50.467213 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 01:09:50.467219 | orchestrator | Sunday 29 March 2026 01:09:47 +0000 (0:00:00.255) 0:01:56.654 **********
2026-03-29 01:09:50.467224 | orchestrator | ===============================================================================
2026-03-29 01:09:50.467229 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 36.81s
2026-03-29 01:09:50.467238 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.00s
2026-03-29 01:09:50.467243 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 24.17s
2026-03-29 01:09:50.467248 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 3.06s
2026-03-29 01:09:50.467253 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.97s
2026-03-29 01:09:50.467258 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.55s
2026-03-29 01:09:50.467263 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.37s
2026-03-29 01:09:50.467268 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.05s
2026-03-29 01:09:50.467273 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.56s
2026-03-29 01:09:50.467278 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.50s
2026-03-29 01:09:50.467283 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.41s
2026-03-29 01:09:50.467288 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.40s
2026-03-29 01:09:50.467293 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.34s
2026-03-29 01:09:50.467303 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.33s
2026-03-29 01:09:50.467309 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.15s
2026-03-29 01:09:50.467314 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.99s
2026-03-29 01:09:50.467319 | orchestrator | grafana : Check if extra
configuration file exists ---------------------- 0.80s 2026-03-29 01:09:50.467324 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.69s 2026-03-29 01:09:50.467329 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.63s 2026-03-29 01:09:50.467334 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.58s 2026-03-29 01:09:50.467339 | orchestrator | 2026-03-29 01:09:50 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:09:50.467345 | orchestrator | 2026-03-29 01:09:50 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:09:53.521753 | orchestrator | 2026-03-29 01:09:53 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:09:53.523066 | orchestrator | 2026-03-29 01:09:53 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:09:53.523109 | orchestrator | 2026-03-29 01:09:53 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:09:56.574662 | orchestrator | 2026-03-29 01:09:56 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:09:56.576303 | orchestrator | 2026-03-29 01:09:56 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:09:56.576380 | orchestrator | 2026-03-29 01:09:56 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:09:59.626326 | orchestrator | 2026-03-29 01:09:59 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:09:59.626897 | orchestrator | 2026-03-29 01:09:59 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:09:59.627298 | orchestrator | 2026-03-29 01:09:59 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:10:02.667581 | orchestrator | 2026-03-29 01:10:02 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:10:02.670567 | 
orchestrator | 2026-03-29 01:10:02 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:10:02.670614 | orchestrator | 2026-03-29 01:10:02 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:10:05.720668 | orchestrator | 2026-03-29 01:10:05 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:10:05.721734 | orchestrator | 2026-03-29 01:10:05 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:10:05.721768 | orchestrator | 2026-03-29 01:10:05 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:10:08.763222 | orchestrator | 2026-03-29 01:10:08 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:10:08.765234 | orchestrator | 2026-03-29 01:10:08 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:10:08.765275 | orchestrator | 2026-03-29 01:10:08 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:10:11.805805 | orchestrator | 2026-03-29 01:10:11 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:10:11.806731 | orchestrator | 2026-03-29 01:10:11 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state STARTED 2026-03-29 01:10:11.807340 | orchestrator | 2026-03-29 01:10:11 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:10:14.853033 | orchestrator | 2026-03-29 01:10:14 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:10:14.857077 | orchestrator | 2026-03-29 01:10:14 | INFO  | Task 05bed01a-cd1b-437f-92a2-658822fb3875 is in state SUCCESS 2026-03-29 01:10:14.859026 | orchestrator | 2026-03-29 01:10:14.859078 | orchestrator | 2026-03-29 01:10:14.859084 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:10:14.859089 | orchestrator | 2026-03-29 01:10:14.859093 | orchestrator | TASK [Group hosts based on OpenStack release] 
********************************** 2026-03-29 01:10:14.859099 | orchestrator | Sunday 29 March 2026 01:01:09 +0000 (0:00:00.248) 0:00:00.248 ********** 2026-03-29 01:10:14.859106 | orchestrator | changed: [testbed-manager] 2026-03-29 01:10:14.859111 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:10:14.859115 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:10:14.859119 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:10:14.859124 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:10:14.859127 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:10:14.859132 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:10:14.859136 | orchestrator | 2026-03-29 01:10:14.859140 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:10:14.859144 | orchestrator | Sunday 29 March 2026 01:01:09 +0000 (0:00:00.558) 0:00:00.807 ********** 2026-03-29 01:10:14.859148 | orchestrator | changed: [testbed-manager] 2026-03-29 01:10:14.859151 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:10:14.859155 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:10:14.859159 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:10:14.859163 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:10:14.859166 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:10:14.859170 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:10:14.859174 | orchestrator | 2026-03-29 01:10:14.859177 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:10:14.859181 | orchestrator | Sunday 29 March 2026 01:01:10 +0000 (0:00:00.714) 0:00:01.521 ********** 2026-03-29 01:10:14.859202 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-03-29 01:10:14.859207 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-03-29 01:10:14.859211 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 
2026-03-29 01:10:14.859214 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-03-29 01:10:14.859218 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-03-29 01:10:14.859222 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-03-29 01:10:14.859225 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-03-29 01:10:14.859229 | orchestrator | 2026-03-29 01:10:14.859233 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-03-29 01:10:14.859236 | orchestrator | 2026-03-29 01:10:14.859240 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-29 01:10:14.859244 | orchestrator | Sunday 29 March 2026 01:01:10 +0000 (0:00:00.626) 0:00:02.147 ********** 2026-03-29 01:10:14.859248 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:10:14.859252 | orchestrator | 2026-03-29 01:10:14.859255 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-03-29 01:10:14.859259 | orchestrator | Sunday 29 March 2026 01:01:11 +0000 (0:00:00.624) 0:00:02.772 ********** 2026-03-29 01:10:14.859264 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-03-29 01:10:14.859268 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-03-29 01:10:14.859272 | orchestrator | 2026-03-29 01:10:14.859276 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-03-29 01:10:14.859279 | orchestrator | Sunday 29 March 2026 01:01:16 +0000 (0:00:05.251) 0:00:08.023 ********** 2026-03-29 01:10:14.859283 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-29 01:10:14.859287 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-29 01:10:14.859291 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:10:14.859295 | orchestrator | 2026-03-29 
01:10:14.859298 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-29 01:10:14.859302 | orchestrator | Sunday 29 March 2026 01:01:20 +0000 (0:00:04.111) 0:00:12.135 ********** 2026-03-29 01:10:14.859306 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:10:14.859310 | orchestrator | 2026-03-29 01:10:14.859313 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-03-29 01:10:14.859317 | orchestrator | Sunday 29 March 2026 01:01:21 +0000 (0:00:00.833) 0:00:12.968 ********** 2026-03-29 01:10:14.859321 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:10:14.859324 | orchestrator | 2026-03-29 01:10:14.859328 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-03-29 01:10:14.859332 | orchestrator | Sunday 29 March 2026 01:01:23 +0000 (0:00:01.332) 0:00:14.301 ********** 2026-03-29 01:10:14.859336 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:10:14.859339 | orchestrator | 2026-03-29 01:10:14.859343 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-29 01:10:14.859347 | orchestrator | Sunday 29 March 2026 01:01:26 +0000 (0:00:03.119) 0:00:17.420 ********** 2026-03-29 01:10:14.859351 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.859355 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.859358 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.859362 | orchestrator | 2026-03-29 01:10:14.859366 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-29 01:10:14.859369 | orchestrator | Sunday 29 March 2026 01:01:26 +0000 (0:00:00.639) 0:00:18.060 ********** 2026-03-29 01:10:14.859373 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:10:14.859377 | orchestrator | 2026-03-29 01:10:14.859381 | orchestrator | TASK [nova : Create cell0 mappings] 
******************************************** 2026-03-29 01:10:14.859385 | orchestrator | Sunday 29 March 2026 01:02:00 +0000 (0:00:33.237) 0:00:51.297 ********** 2026-03-29 01:10:14.859393 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:10:14.859397 | orchestrator | 2026-03-29 01:10:14.859400 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-29 01:10:14.859404 | orchestrator | Sunday 29 March 2026 01:02:16 +0000 (0:00:16.377) 0:01:07.675 ********** 2026-03-29 01:10:14.859422 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:10:14.859432 | orchestrator | 2026-03-29 01:10:14.859440 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-29 01:10:14.859446 | orchestrator | Sunday 29 March 2026 01:02:31 +0000 (0:00:14.595) 0:01:22.271 ********** 2026-03-29 01:10:14.859462 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:10:14.859468 | orchestrator | 2026-03-29 01:10:14.859493 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-03-29 01:10:14.859499 | orchestrator | Sunday 29 March 2026 01:02:31 +0000 (0:00:00.606) 0:01:22.877 ********** 2026-03-29 01:10:14.859504 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.859510 | orchestrator | 2026-03-29 01:10:14.859516 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-29 01:10:14.859522 | orchestrator | Sunday 29 March 2026 01:02:32 +0000 (0:00:00.402) 0:01:23.280 ********** 2026-03-29 01:10:14.859529 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:10:14.859535 | orchestrator | 2026-03-29 01:10:14.859541 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-29 01:10:14.859547 | orchestrator | Sunday 29 March 2026 01:02:32 +0000 (0:00:00.585) 
0:01:23.865 ********** 2026-03-29 01:10:14.859552 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:10:14.859559 | orchestrator | 2026-03-29 01:10:14.859565 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-29 01:10:14.859570 | orchestrator | Sunday 29 March 2026 01:02:52 +0000 (0:00:19.664) 0:01:43.530 ********** 2026-03-29 01:10:14.859576 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.859584 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.859590 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.859596 | orchestrator | 2026-03-29 01:10:14.859602 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-03-29 01:10:14.859608 | orchestrator | 2026-03-29 01:10:14.859614 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-29 01:10:14.859620 | orchestrator | Sunday 29 March 2026 01:02:52 +0000 (0:00:00.559) 0:01:44.090 ********** 2026-03-29 01:10:14.859627 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:10:14.859633 | orchestrator | 2026-03-29 01:10:14.859639 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-03-29 01:10:14.859646 | orchestrator | Sunday 29 March 2026 01:02:54 +0000 (0:00:01.807) 0:01:45.897 ********** 2026-03-29 01:10:14.859652 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.859658 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.859664 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:10:14.859670 | orchestrator | 2026-03-29 01:10:14.859676 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-03-29 01:10:14.859683 | orchestrator | Sunday 29 March 2026 01:02:57 +0000 (0:00:02.338) 0:01:48.235 ********** 2026-03-29 01:10:14.859689 | orchestrator | skipping: 
[testbed-node-1] 2026-03-29 01:10:14.859695 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.859701 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:10:14.859707 | orchestrator | 2026-03-29 01:10:14.859713 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-29 01:10:14.859718 | orchestrator | Sunday 29 March 2026 01:02:59 +0000 (0:00:02.463) 0:01:50.699 ********** 2026-03-29 01:10:14.859723 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.859729 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.859734 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.859741 | orchestrator | 2026-03-29 01:10:14.859746 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-29 01:10:14.859758 | orchestrator | Sunday 29 March 2026 01:03:00 +0000 (0:00:01.465) 0:01:52.165 ********** 2026-03-29 01:10:14.859765 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-29 01:10:14.859771 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.859778 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-29 01:10:14.859784 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.859790 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-29 01:10:14.859796 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-03-29 01:10:14.859803 | orchestrator | 2026-03-29 01:10:14.859807 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-29 01:10:14.859811 | orchestrator | Sunday 29 March 2026 01:03:09 +0000 (0:00:08.926) 0:02:01.092 ********** 2026-03-29 01:10:14.859814 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.859818 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.859822 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.859825 | orchestrator | 2026-03-29 
01:10:14.859829 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-29 01:10:14.859834 | orchestrator | Sunday 29 March 2026 01:03:10 +0000 (0:00:00.601) 0:02:01.693 ********** 2026-03-29 01:10:14.859840 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-29 01:10:14.859846 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.859851 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-29 01:10:14.859857 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.859864 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-29 01:10:14.859869 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.859875 | orchestrator | 2026-03-29 01:10:14.859880 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-29 01:10:14.859886 | orchestrator | Sunday 29 March 2026 01:03:13 +0000 (0:00:03.030) 0:02:04.724 ********** 2026-03-29 01:10:14.859891 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:10:14.859897 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.859903 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.859909 | orchestrator | 2026-03-29 01:10:14.859916 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-03-29 01:10:14.859922 | orchestrator | Sunday 29 March 2026 01:03:14 +0000 (0:00:01.233) 0:02:05.957 ********** 2026-03-29 01:10:14.859928 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.859934 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.859940 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:10:14.859946 | orchestrator | 2026-03-29 01:10:14.859958 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-03-29 01:10:14.859965 | orchestrator | Sunday 29 March 2026 01:03:16 +0000 (0:00:01.471) 0:02:07.429 ********** 2026-03-29 
01:10:14.859971 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.859977 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.859988 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:10:14.859993 | orchestrator | 2026-03-29 01:10:14.859997 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-03-29 01:10:14.860000 | orchestrator | Sunday 29 March 2026 01:03:19 +0000 (0:00:02.941) 0:02:10.370 ********** 2026-03-29 01:10:14.860004 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.860008 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.860012 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:10:14.860015 | orchestrator | 2026-03-29 01:10:14.860019 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-29 01:10:14.860023 | orchestrator | Sunday 29 March 2026 01:03:41 +0000 (0:00:22.166) 0:02:32.536 ********** 2026-03-29 01:10:14.860029 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.860035 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.860040 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:10:14.860046 | orchestrator | 2026-03-29 01:10:14.860051 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-29 01:10:14.860062 | orchestrator | Sunday 29 March 2026 01:03:54 +0000 (0:00:13.313) 0:02:45.850 ********** 2026-03-29 01:10:14.860068 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:10:14.860073 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.860079 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.860084 | orchestrator | 2026-03-29 01:10:14.860090 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-03-29 01:10:14.860095 | orchestrator | Sunday 29 March 2026 01:03:55 +0000 (0:00:00.706) 0:02:46.557 ********** 2026-03-29 01:10:14.860101 | 
orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.860107 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.860113 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:10:14.860119 | orchestrator | 2026-03-29 01:10:14.860124 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-03-29 01:10:14.860130 | orchestrator | Sunday 29 March 2026 01:04:09 +0000 (0:00:13.780) 0:03:00.337 ********** 2026-03-29 01:10:14.860135 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.860141 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.860147 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.860153 | orchestrator | 2026-03-29 01:10:14.860158 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-29 01:10:14.860165 | orchestrator | Sunday 29 March 2026 01:04:10 +0000 (0:00:01.376) 0:03:01.714 ********** 2026-03-29 01:10:14.860172 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.860176 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.860179 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.860183 | orchestrator | 2026-03-29 01:10:14.860187 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-03-29 01:10:14.860191 | orchestrator | 2026-03-29 01:10:14.860194 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-29 01:10:14.860198 | orchestrator | Sunday 29 March 2026 01:04:10 +0000 (0:00:00.328) 0:03:02.043 ********** 2026-03-29 01:10:14.860202 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:10:14.860207 | orchestrator | 2026-03-29 01:10:14.860210 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-03-29 01:10:14.860214 | orchestrator | Sunday 
29 March 2026 01:04:11 +0000 (0:00:00.659) 0:03:02.702 ********** 2026-03-29 01:10:14.860218 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-03-29 01:10:14.860222 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-03-29 01:10:14.860226 | orchestrator | 2026-03-29 01:10:14.860229 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-03-29 01:10:14.860233 | orchestrator | Sunday 29 March 2026 01:04:15 +0000 (0:00:03.535) 0:03:06.238 ********** 2026-03-29 01:10:14.860237 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-03-29 01:10:14.860242 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-03-29 01:10:14.860246 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-03-29 01:10:14.860250 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-03-29 01:10:14.860254 | orchestrator | 2026-03-29 01:10:14.860257 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-03-29 01:10:14.860261 | orchestrator | Sunday 29 March 2026 01:04:22 +0000 (0:00:07.150) 0:03:13.388 ********** 2026-03-29 01:10:14.860265 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-29 01:10:14.860269 | orchestrator | 2026-03-29 01:10:14.860272 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-03-29 01:10:14.860276 | orchestrator | Sunday 29 March 2026 01:04:25 +0000 (0:00:03.441) 0:03:16.829 ********** 2026-03-29 01:10:14.860284 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-03-29 01:10:14.860288 | orchestrator | [WARNING]: Module did not set no_log for 
update_password 2026-03-29 01:10:14.860292 | orchestrator | 2026-03-29 01:10:14.860295 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-03-29 01:10:14.860299 | orchestrator | Sunday 29 March 2026 01:04:29 +0000 (0:00:03.423) 0:03:20.253 ********** 2026-03-29 01:10:14.860303 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-29 01:10:14.860307 | orchestrator | 2026-03-29 01:10:14.860310 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-03-29 01:10:14.860314 | orchestrator | Sunday 29 March 2026 01:04:32 +0000 (0:00:03.309) 0:03:23.562 ********** 2026-03-29 01:10:14.860322 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-03-29 01:10:14.860326 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-03-29 01:10:14.860330 | orchestrator | 2026-03-29 01:10:14.860333 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-29 01:10:14.860342 | orchestrator | Sunday 29 March 2026 01:04:39 +0000 (0:00:07.378) 0:03:30.941 ********** 2026-03-29 01:10:14.860350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:10:14.860357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.860362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:10:14.860380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:10:14.860385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.860390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.860394 | orchestrator | 2026-03-29 01:10:14.860398 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-29 01:10:14.860401 | orchestrator | Sunday 29 March 2026 01:04:43 +0000 (0:00:03.874) 0:03:34.815 ********** 2026-03-29 01:10:14.860405 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.860411 | orchestrator | 2026-03-29 01:10:14.860416 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-29 01:10:14.860422 | orchestrator | Sunday 29 March 2026 01:04:43 +0000 (0:00:00.102) 0:03:34.918 ********** 2026-03-29 01:10:14.860427 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.860433 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.860438 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.860443 | orchestrator | 2026-03-29 01:10:14.860449 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-29 01:10:14.860454 | orchestrator | Sunday 29 March 2026 01:04:43 +0000 (0:00:00.288) 0:03:35.207 ********** 2026-03-29 01:10:14.860460 | 
orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 01:10:14.860465 | orchestrator | 2026-03-29 01:10:14.860471 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-29 01:10:14.860582 | orchestrator | Sunday 29 March 2026 01:04:44 +0000 (0:00:00.832) 0:03:36.039 ********** 2026-03-29 01:10:14.860596 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.860601 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.860605 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.860608 | orchestrator | 2026-03-29 01:10:14.860612 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-29 01:10:14.860616 | orchestrator | Sunday 29 March 2026 01:04:45 +0000 (0:00:00.500) 0:03:36.540 ********** 2026-03-29 01:10:14.860620 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:10:14.860623 | orchestrator | 2026-03-29 01:10:14.860627 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-29 01:10:14.860631 | orchestrator | Sunday 29 March 2026 01:04:47 +0000 (0:00:01.989) 0:03:38.529 ********** 2026-03-29 01:10:14.860639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:10:14.860658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:10:14.860663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:10:14.860671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.860676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.860692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.860696 | orchestrator | 2026-03-29 01:10:14.860700 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-29 01:10:14.860703 | orchestrator | Sunday 29 March 2026 01:04:50 +0000 (0:00:03.434) 0:03:41.964 ********** 2026-03-29 01:10:14.860708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 01:10:14.860712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.860719 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.860723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 01:10:14.860728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.860734 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.860743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 01:10:14.860747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.860755 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.860759 | orchestrator | 2026-03-29 01:10:14.860762 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-29 01:10:14.860766 | orchestrator | Sunday 29 March 2026 01:04:51 +0000 (0:00:01.069) 0:03:43.034 ********** 2026-03-29 01:10:14.860770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 01:10:14.860774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.860778 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.860789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 01:10:14.860793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.860800 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.860804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 01:10:14.860808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.860812 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.860816 | orchestrator | 2026-03-29 01:10:14.860820 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-29 01:10:14.860824 | orchestrator | Sunday 29 March 2026 01:04:53 +0000 (0:00:01.316) 0:03:44.351 ********** 2026-03-29 01:10:14.860835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:10:14.860840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:10:14.860848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:10:14.860852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 
01:10:14.860864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.860868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.860872 | orchestrator | 2026-03-29 01:10:14.860876 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-29 01:10:14.860885 | orchestrator | Sunday 29 March 2026 01:04:56 +0000 (0:00:03.304) 0:03:47.656 ********** 2026-03-29 01:10:14.860889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:10:14.860894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:10:14.860904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:10:14.860908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.860916 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.860920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.860923 | orchestrator | 2026-03-29 01:10:14.860927 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-29 01:10:14.860931 | orchestrator | Sunday 29 March 2026 01:05:05 +0000 (0:00:09.557) 0:03:57.213 ********** 2026-03-29 01:10:14.860935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 01:10:14.860944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.860948 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.860952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 01:10:14.860960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.860964 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.860968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 01:10:14.860972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.860976 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.860980 | orchestrator | 2026-03-29 01:10:14.860986 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-29 01:10:14.860990 | orchestrator | Sunday 29 March 2026 01:05:07 +0000 (0:00:01.106) 0:03:58.319 ********** 2026-03-29 01:10:14.860994 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:10:14.860997 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:10:14.861001 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:10:14.861005 | orchestrator | 2026-03-29 01:10:14.861015 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-29 
01:10:14.861019 | orchestrator | Sunday 29 March 2026 01:05:09 +0000 (0:00:02.493) 0:04:00.813 ********** 2026-03-29 01:10:14.861023 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.861027 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.861030 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.861054 | orchestrator | 2026-03-29 01:10:14.861058 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-29 01:10:14.861062 | orchestrator | Sunday 29 March 2026 01:05:10 +0000 (0:00:00.454) 0:04:01.268 ********** 2026-03-29 01:10:14.861066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:10:14.861071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:10:14.861081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:10:14.861090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861102 | orchestrator | 2026-03-29 01:10:14.861106 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-29 01:10:14.861110 | orchestrator | Sunday 29 March 2026 01:05:12 +0000 (0:00:02.541) 0:04:03.809 ********** 2026-03-29 01:10:14.861114 | orchestrator | 2026-03-29 01:10:14.861117 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-29 01:10:14.861121 | orchestrator | Sunday 29 March 2026 01:05:12 +0000 (0:00:00.233) 0:04:04.043 ********** 2026-03-29 01:10:14.861125 | orchestrator | 2026-03-29 01:10:14.861129 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-29 01:10:14.861132 | orchestrator | Sunday 29 March 2026 01:05:13 +0000 (0:00:00.225) 0:04:04.268 ********** 2026-03-29 01:10:14.861136 | orchestrator | 2026-03-29 01:10:14.861140 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-29 01:10:14.861144 | orchestrator | Sunday 29 March 2026 01:05:13 +0000 (0:00:00.230) 0:04:04.499 ********** 2026-03-29 01:10:14.861147 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:10:14.861151 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:10:14.861155 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:10:14.861159 | orchestrator | 2026-03-29 01:10:14.861162 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-29 01:10:14.861166 | orchestrator | Sunday 29 March 2026 01:05:33 +0000 (0:00:20.041) 0:04:24.540 ********** 2026-03-29 01:10:14.861170 | orchestrator | changed: [testbed-node-0] 
2026-03-29 01:10:14.861174 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:10:14.861177 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:10:14.861181 | orchestrator | 2026-03-29 01:10:14.861185 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-29 01:10:14.861189 | orchestrator | 2026-03-29 01:10:14.861192 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-29 01:10:14.861200 | orchestrator | Sunday 29 March 2026 01:05:39 +0000 (0:00:06.058) 0:04:30.599 ********** 2026-03-29 01:10:14.861204 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:10:14.861209 | orchestrator | 2026-03-29 01:10:14.861213 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-29 01:10:14.861216 | orchestrator | Sunday 29 March 2026 01:05:40 +0000 (0:00:01.399) 0:04:31.999 ********** 2026-03-29 01:10:14.861220 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.861224 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:10:14.861227 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:10:14.861231 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.861235 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.861239 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.861243 | orchestrator | 2026-03-29 01:10:14.861247 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-29 01:10:14.861251 | orchestrator | Sunday 29 March 2026 01:05:42 +0000 (0:00:01.484) 0:04:33.483 ********** 2026-03-29 01:10:14.861254 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.861258 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.861262 | orchestrator | skipping: 
[testbed-node-2] 2026-03-29 01:10:14.861269 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 01:10:14.861272 | orchestrator | 2026-03-29 01:10:14.861276 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-29 01:10:14.861283 | orchestrator | Sunday 29 March 2026 01:05:43 +0000 (0:00:00.920) 0:04:34.403 ********** 2026-03-29 01:10:14.861290 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-29 01:10:14.861296 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-29 01:10:14.861301 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-29 01:10:14.861307 | orchestrator | 2026-03-29 01:10:14.861312 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-29 01:10:14.861318 | orchestrator | Sunday 29 March 2026 01:05:44 +0000 (0:00:01.427) 0:04:35.831 ********** 2026-03-29 01:10:14.861325 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-29 01:10:14.861331 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-29 01:10:14.861337 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-29 01:10:14.861343 | orchestrator | 2026-03-29 01:10:14.861350 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-29 01:10:14.861356 | orchestrator | Sunday 29 March 2026 01:05:46 +0000 (0:00:01.558) 0:04:37.389 ********** 2026-03-29 01:10:14.861362 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-29 01:10:14.861366 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.861370 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-29 01:10:14.861373 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:10:14.861377 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-29 01:10:14.861381 | 
orchestrator | skipping: [testbed-node-5] 2026-03-29 01:10:14.861384 | orchestrator | 2026-03-29 01:10:14.861388 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-29 01:10:14.861392 | orchestrator | Sunday 29 March 2026 01:05:48 +0000 (0:00:02.052) 0:04:39.442 ********** 2026-03-29 01:10:14.861396 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 01:10:14.861399 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 01:10:14.861403 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.861407 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-29 01:10:14.861410 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-29 01:10:14.861414 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-29 01:10:14.861423 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 01:10:14.861427 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 01:10:14.861431 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.861435 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 01:10:14.861439 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 01:10:14.861442 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.861446 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-29 01:10:14.861450 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-29 01:10:14.861454 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-29 01:10:14.861457 | orchestrator | 2026-03-29 
01:10:14.861461 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-29 01:10:14.861465 | orchestrator | Sunday 29 March 2026 01:05:49 +0000 (0:00:01.384) 0:04:40.826 ********** 2026-03-29 01:10:14.861468 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:10:14.861472 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.861494 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.861499 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:10:14.861502 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.861507 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:10:14.861513 | orchestrator | 2026-03-29 01:10:14.861518 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-29 01:10:14.861524 | orchestrator | Sunday 29 March 2026 01:05:51 +0000 (0:00:01.933) 0:04:42.759 ********** 2026-03-29 01:10:14.861529 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.861535 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.861541 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.861547 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:10:14.861552 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:10:14.861558 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:10:14.861564 | orchestrator | 2026-03-29 01:10:14.861570 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-29 01:10:14.861576 | orchestrator | Sunday 29 March 2026 01:05:53 +0000 (0:00:02.082) 0:04:44.842 ********** 2026-03-29 01:10:14.861588 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861599 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861606 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861618 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861631 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861646 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861654 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861673 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861680 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 
01:10:14.861686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861722 | orchestrator | 2026-03-29 01:10:14.861728 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-29 01:10:14.861734 | orchestrator | Sunday 29 March 2026 01:05:56 +0000 (0:00:03.196) 0:04:48.038 ********** 2026-03-29 01:10:14.861741 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:10:14.861747 | orchestrator | 2026-03-29 01:10:14.861751 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-29 01:10:14.861755 | orchestrator | Sunday 29 March 2026 01:05:58 +0000 (0:00:02.180) 0:04:50.219 ********** 2026-03-29 01:10:14.861759 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861763 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861773 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861780 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861793 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861797 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861801 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861835 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861841 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861847 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.861854 | 
orchestrator | 2026-03-29 01:10:14.861860 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-29 01:10:14.861870 | orchestrator | Sunday 29 March 2026 01:06:03 +0000 (0:00:04.413) 0:04:54.633 ********** 2026-03-29 01:10:14.862144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:10:14.862166 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:10:14.862171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:10:14.862176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:10:14.862180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.862184 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.862196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.862207 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:10:14.862211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:10:14.862215 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:10:14.862219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.862223 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:10:14.862227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 01:10:14.862231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.862238 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.862247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 01:10:14.862251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.862255 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.862259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 01:10:14.862263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.862267 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.862271 | orchestrator | 2026-03-29 01:10:14.862275 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-29 01:10:14.862279 | orchestrator | Sunday 29 March 2026 01:06:05 +0000 (0:00:02.309) 0:04:56.942 ********** 2026-03-29 01:10:14.862286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:10:14.862296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:10:14.862308 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.862315 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:10:14.862321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:10:14.862327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:10:14.862334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.862340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:10:14.862350 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.862360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:10:14.862371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.862377 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:10:14.862383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 01:10:14.862389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.862396 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.862402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 01:10:14.862413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.862418 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.862422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 01:10:14.862432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.862436 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.862440 | orchestrator | 2026-03-29 01:10:14.862444 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-29 01:10:14.862448 | orchestrator | Sunday 29 March 2026 01:06:08 +0000 (0:00:02.816) 0:04:59.759 ********** 2026-03-29 01:10:14.862452 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.862455 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.862459 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.862463 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 01:10:14.862467 | orchestrator | 2026-03-29 01:10:14.862471 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-29 01:10:14.862496 | orchestrator | Sunday 29 March 2026 01:06:09 +0000 (0:00:01.176) 0:05:00.935 ********** 2026-03-29 01:10:14.862501 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-29 01:10:14.862504 | 
orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-29 01:10:14.862508 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-29 01:10:14.862512 | orchestrator | 2026-03-29 01:10:14.862516 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-29 01:10:14.862520 | orchestrator | Sunday 29 March 2026 01:06:11 +0000 (0:00:01.343) 0:05:02.278 ********** 2026-03-29 01:10:14.862523 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-29 01:10:14.862527 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-29 01:10:14.862531 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-29 01:10:14.862535 | orchestrator | 2026-03-29 01:10:14.862538 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-29 01:10:14.862542 | orchestrator | Sunday 29 March 2026 01:06:12 +0000 (0:00:01.455) 0:05:03.734 ********** 2026-03-29 01:10:14.862546 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:10:14.862550 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:10:14.862554 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:10:14.862558 | orchestrator | 2026-03-29 01:10:14.862562 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-29 01:10:14.862569 | orchestrator | Sunday 29 March 2026 01:06:13 +0000 (0:00:00.809) 0:05:04.543 ********** 2026-03-29 01:10:14.862573 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:10:14.862577 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:10:14.862581 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:10:14.862585 | orchestrator | 2026-03-29 01:10:14.862589 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-29 01:10:14.862593 | orchestrator | Sunday 29 March 2026 01:06:14 +0000 (0:00:01.080) 0:05:05.623 ********** 2026-03-29 01:10:14.862597 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-29 
01:10:14.862601 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-29 01:10:14.862605 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-29 01:10:14.862609 | orchestrator | 2026-03-29 01:10:14.862612 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-29 01:10:14.862616 | orchestrator | Sunday 29 March 2026 01:06:15 +0000 (0:00:01.162) 0:05:06.786 ********** 2026-03-29 01:10:14.862620 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-29 01:10:14.862624 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-29 01:10:14.862628 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-29 01:10:14.862631 | orchestrator | 2026-03-29 01:10:14.862635 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-29 01:10:14.862639 | orchestrator | Sunday 29 March 2026 01:06:16 +0000 (0:00:01.292) 0:05:08.079 ********** 2026-03-29 01:10:14.862643 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-29 01:10:14.862646 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-29 01:10:14.862650 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-29 01:10:14.862654 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-29 01:10:14.862658 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-29 01:10:14.862661 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-29 01:10:14.862665 | orchestrator | 2026-03-29 01:10:14.862669 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-29 01:10:14.862673 | orchestrator | Sunday 29 March 2026 01:06:21 +0000 (0:00:04.297) 0:05:12.377 ********** 2026-03-29 01:10:14.862676 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.862680 | orchestrator | skipping: 
[testbed-node-4] 2026-03-29 01:10:14.862684 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:10:14.862688 | orchestrator | 2026-03-29 01:10:14.862691 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-29 01:10:14.862695 | orchestrator | Sunday 29 March 2026 01:06:21 +0000 (0:00:00.326) 0:05:12.703 ********** 2026-03-29 01:10:14.862699 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.862703 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:10:14.862706 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:10:14.862710 | orchestrator | 2026-03-29 01:10:14.862714 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-29 01:10:14.862718 | orchestrator | Sunday 29 March 2026 01:06:21 +0000 (0:00:00.290) 0:05:12.994 ********** 2026-03-29 01:10:14.862722 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:10:14.862726 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:10:14.862729 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:10:14.862733 | orchestrator | 2026-03-29 01:10:14.862740 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-29 01:10:14.862744 | orchestrator | Sunday 29 March 2026 01:06:23 +0000 (0:00:01.402) 0:05:14.397 ********** 2026-03-29 01:10:14.862750 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-29 01:10:14.862757 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-29 01:10:14.862770 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-29 01:10:14.862777 | orchestrator | changed: [testbed-node-5] => (item={'uuid': 
'63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-29 01:10:14.862783 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-29 01:10:14.862788 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-29 01:10:14.862792 | orchestrator | 2026-03-29 01:10:14.862797 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-29 01:10:14.862801 | orchestrator | Sunday 29 March 2026 01:06:26 +0000 (0:00:03.575) 0:05:17.972 ********** 2026-03-29 01:10:14.862805 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-29 01:10:14.862810 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-29 01:10:14.862814 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-29 01:10:14.862818 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-29 01:10:14.862823 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:10:14.862827 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-29 01:10:14.862832 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:10:14.862836 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-29 01:10:14.862840 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:10:14.862845 | orchestrator | 2026-03-29 01:10:14.862849 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-03-29 01:10:14.862854 | orchestrator | Sunday 29 March 2026 01:06:30 +0000 (0:00:03.374) 0:05:21.346 ********** 2026-03-29 01:10:14.862858 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.862862 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.862865 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.862869 | orchestrator | 
included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 01:10:14.862873 | orchestrator | 2026-03-29 01:10:14.862877 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-03-29 01:10:14.862881 | orchestrator | Sunday 29 March 2026 01:06:32 +0000 (0:00:02.176) 0:05:23.523 ********** 2026-03-29 01:10:14.862884 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-29 01:10:14.862888 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-29 01:10:14.862892 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-29 01:10:14.862895 | orchestrator | 2026-03-29 01:10:14.862899 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-03-29 01:10:14.862903 | orchestrator | Sunday 29 March 2026 01:06:33 +0000 (0:00:00.959) 0:05:24.482 ********** 2026-03-29 01:10:14.862907 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.862911 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:10:14.862914 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:10:14.862918 | orchestrator | 2026-03-29 01:10:14.862922 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-29 01:10:14.862925 | orchestrator | Sunday 29 March 2026 01:06:33 +0000 (0:00:00.302) 0:05:24.785 ********** 2026-03-29 01:10:14.862929 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.862933 | orchestrator | 2026-03-29 01:10:14.862937 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-29 01:10:14.862941 | orchestrator | Sunday 29 March 2026 01:06:33 +0000 (0:00:00.129) 0:05:24.914 ********** 2026-03-29 01:10:14.862944 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.862948 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:10:14.862952 | orchestrator | skipping: [testbed-node-5] 2026-03-29 
01:10:14.862956 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.862959 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.862963 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.862971 | orchestrator | 2026-03-29 01:10:14.862975 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-29 01:10:14.862978 | orchestrator | Sunday 29 March 2026 01:06:34 +0000 (0:00:00.805) 0:05:25.719 ********** 2026-03-29 01:10:14.862982 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-29 01:10:14.862986 | orchestrator | 2026-03-29 01:10:14.862990 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-29 01:10:14.862994 | orchestrator | Sunday 29 March 2026 01:06:35 +0000 (0:00:00.785) 0:05:26.504 ********** 2026-03-29 01:10:14.862997 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.863001 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:10:14.863005 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:10:14.863009 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.863013 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.863016 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.863020 | orchestrator | 2026-03-29 01:10:14.863024 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-29 01:10:14.863028 | orchestrator | Sunday 29 March 2026 01:06:35 +0000 (0:00:00.547) 0:05:27.052 ********** 2026-03-29 01:10:14.863040 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863045 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863049 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863078 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863083 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863086 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863112 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863117 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863121 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863125 | orchestrator | 2026-03-29 01:10:14.863129 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-29 01:10:14.863133 | orchestrator | Sunday 29 March 2026 01:06:39 +0000 (0:00:03.804) 0:05:30.857 ********** 2026-03-29 01:10:14.863139 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:10:14.863149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:10:14.863156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:10:14.863166 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:10:14.863172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:10:14.863178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:10:14.863270 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}}) 2026-03-29 01:10:14.863287 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:10:14.863298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863307 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.863337 | orchestrator | 2026-03-29 01:10:14.863344 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-29 01:10:14.863348 | orchestrator | Sunday 29 March 2026 01:06:46 +0000 (0:00:06.551) 0:05:37.408 ********** 2026-03-29 01:10:14.863352 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.863356 | orchestrator | skipping: [testbed-node-4] 
2026-03-29 01:10:14.863362 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:10:14.863366 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.863370 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.863374 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.863377 | orchestrator | 2026-03-29 01:10:14.863381 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-29 01:10:14.863385 | orchestrator | Sunday 29 March 2026 01:06:47 +0000 (0:00:01.374) 0:05:38.783 ********** 2026-03-29 01:10:14.863389 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-29 01:10:14.863392 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-29 01:10:14.863396 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-29 01:10:14.863400 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-29 01:10:14.863404 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-29 01:10:14.863407 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-29 01:10:14.863411 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.863421 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-29 01:10:14.863425 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-29 01:10:14.863429 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.863433 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-29 01:10:14.863437 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.863440 | orchestrator | changed: 
[testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-29 01:10:14.863444 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-29 01:10:14.863448 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-29 01:10:14.863452 | orchestrator | 2026-03-29 01:10:14.863455 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-29 01:10:14.863459 | orchestrator | Sunday 29 March 2026 01:06:52 +0000 (0:00:04.491) 0:05:43.274 ********** 2026-03-29 01:10:14.863463 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.863467 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:10:14.863470 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:10:14.863513 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.863518 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.863521 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.863525 | orchestrator | 2026-03-29 01:10:14.863529 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-29 01:10:14.863533 | orchestrator | Sunday 29 March 2026 01:06:52 +0000 (0:00:00.660) 0:05:43.935 ********** 2026-03-29 01:10:14.863537 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-29 01:10:14.863541 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-29 01:10:14.863545 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-29 01:10:14.863549 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-29 01:10:14.863552 | orchestrator | 
changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-29 01:10:14.863556 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-29 01:10:14.863560 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-29 01:10:14.863563 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-29 01:10:14.863567 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-29 01:10:14.863571 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-29 01:10:14.863575 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.863578 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-29 01:10:14.863582 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-29 01:10:14.863586 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.863589 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-29 01:10:14.863593 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.863600 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-29 01:10:14.863608 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-29 01:10:14.863615 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 
'nova-libvirt'}) 2026-03-29 01:10:14.863619 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-29 01:10:14.863622 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-29 01:10:14.863626 | orchestrator | 2026-03-29 01:10:14.863630 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-29 01:10:14.863634 | orchestrator | Sunday 29 March 2026 01:06:58 +0000 (0:00:05.625) 0:05:49.561 ********** 2026-03-29 01:10:14.863638 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-29 01:10:14.863641 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-29 01:10:14.863645 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-29 01:10:14.863649 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-29 01:10:14.863653 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-29 01:10:14.863657 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-29 01:10:14.863660 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-29 01:10:14.863664 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-29 01:10:14.863668 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-29 01:10:14.863671 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-29 01:10:14.863675 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  
2026-03-29 01:10:14.863679 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-29 01:10:14.863683 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.863686 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-29 01:10:14.863690 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.863694 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-29 01:10:14.863698 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-29 01:10:14.863701 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-29 01:10:14.863705 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-29 01:10:14.863709 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-29 01:10:14.863713 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.863716 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-29 01:10:14.863720 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-29 01:10:14.863724 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-29 01:10:14.863727 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-29 01:10:14.863731 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-29 01:10:14.863735 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-29 01:10:14.863738 | orchestrator | 2026-03-29 01:10:14.863742 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 
2026-03-29 01:10:14.863749 | orchestrator | Sunday 29 March 2026 01:07:04 +0000 (0:00:06.250) 0:05:55.811 ********** 2026-03-29 01:10:14.863753 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.863757 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:10:14.863761 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:10:14.863764 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.863768 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.863772 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.863776 | orchestrator | 2026-03-29 01:10:14.863779 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-29 01:10:14.863783 | orchestrator | Sunday 29 March 2026 01:07:05 +0000 (0:00:00.515) 0:05:56.327 ********** 2026-03-29 01:10:14.863787 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.863791 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:10:14.863794 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:10:14.863798 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.863802 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.863806 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.863810 | orchestrator | 2026-03-29 01:10:14.863813 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-29 01:10:14.863817 | orchestrator | Sunday 29 March 2026 01:07:05 +0000 (0:00:00.665) 0:05:56.993 ********** 2026-03-29 01:10:14.863821 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.863825 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.863828 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.863836 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:10:14.863839 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:10:14.863843 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:10:14.863847 | 
orchestrator | 2026-03-29 01:10:14.863851 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] ******************* 2026-03-29 01:10:14.863857 | orchestrator | Sunday 29 March 2026 01:07:07 +0000 (0:00:01.602) 0:05:58.596 ********** 2026-03-29 01:10:14.863861 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.863865 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.863869 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.863872 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:10:14.863876 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:10:14.863880 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:10:14.863883 | orchestrator | 2026-03-29 01:10:14.863887 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-29 01:10:14.863891 | orchestrator | Sunday 29 March 2026 01:07:09 +0000 (0:00:02.263) 0:06:00.859 ********** 2026-03-29 01:10:14.863895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:10:14.863900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:10:14.863908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.863912 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.863916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:10:14.863925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:10:14.863930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.863934 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:10:14.863937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 01:10:14.863942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.863949 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.863953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2026-03-29 01:10:14.863957 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:10:14.864000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.864005 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:10:14.864009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 01:10:14.864013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.864022 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.864028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 01:10:14.864034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:10:14.864040 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.864046 | orchestrator | 2026-03-29 01:10:14.864052 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-29 01:10:14.864059 | orchestrator | Sunday 29 March 2026 01:07:12 +0000 (0:00:02.726) 0:06:03.585 ********** 2026-03-29 01:10:14.864064 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-29 01:10:14.864070 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-29 01:10:14.864077 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.864082 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-29 01:10:14.864089 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-29 01:10:14.864096 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:10:14.864102 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-29 01:10:14.864108 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-29 01:10:14.864114 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:10:14.864119 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-29 01:10:14.864125 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-29 01:10:14.864132 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.864137 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-29 01:10:14.864143 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-29 01:10:14.864149 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.864154 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-29 01:10:14.864160 | orchestrator | skipping: [testbed-node-2] => 
(item=nova-compute-ironic)  2026-03-29 01:10:14.864165 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.864172 | orchestrator | 2026-03-29 01:10:14.864185 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-29 01:10:14.864190 | orchestrator | Sunday 29 March 2026 01:07:13 +0000 (0:00:01.186) 0:06:04.772 ********** 2026-03-29 01:10:14.864201 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:10:14.864213 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:10:14.864220 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 
'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:10:14.864226 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:10:14.864233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:10:14.864248 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.864256 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:10:14.864260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:10:14.864264 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:10:14.864268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:10:14.864272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.864280 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.864285 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.864292 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.864296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:10:14.864300 | orchestrator | 2026-03-29 01:10:14.864303 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-29 01:10:14.864307 | orchestrator | Sunday 29 March 2026 01:07:17 +0000 (0:00:03.630) 0:06:08.402 ********** 2026-03-29 01:10:14.864311 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.864315 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:10:14.864319 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:10:14.864323 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.864326 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.864330 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.864334 | orchestrator | 2026-03-29 01:10:14.864338 | orchestrator | TASK 
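Each container definition in the items above carries a kolla-style healthcheck dict (`interval`, `retries`, `start_period`, `test`, `timeout`). As a hedged sketch of what such a dict amounts to, it maps roughly onto `docker run` health flags; `healthcheck_to_docker_flags` below is a hypothetical helper for illustration, not kolla-ansible code, and the seconds units are an assumption based on the values in the log:

```python
# Hypothetical helper: convert a kolla-style healthcheck dict (as seen in
# the log items above) into `docker run` flags. Field names match the log;
# the function itself is illustrative, not taken from kolla-ansible.
def healthcheck_to_docker_flags(hc: dict) -> list[str]:
    test = hc["test"]
    # kolla uses the ['CMD-SHELL', '<command>'] form; keep only the command.
    cmd = test[1] if test[0] == "CMD-SHELL" else " ".join(test)
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

flags = healthcheck_to_docker_flags({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port nova-conductor 5672"],
    "timeout": "30",
})
print(flags[0])  # --health-cmd=healthcheck_port nova-conductor 5672
```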
[nova-cell : Flush handlers] ********************************************** 2026-03-29 01:10:14.864342 | orchestrator | Sunday 29 March 2026 01:07:18 +0000 (0:00:01.211) 0:06:09.614 ********** 2026-03-29 01:10:14.864345 | orchestrator | 2026-03-29 01:10:14.864349 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-29 01:10:14.864353 | orchestrator | Sunday 29 March 2026 01:07:18 +0000 (0:00:00.109) 0:06:09.723 ********** 2026-03-29 01:10:14.864357 | orchestrator | 2026-03-29 01:10:14.864361 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-29 01:10:14.864364 | orchestrator | Sunday 29 March 2026 01:07:18 +0000 (0:00:00.100) 0:06:09.824 ********** 2026-03-29 01:10:14.864368 | orchestrator | 2026-03-29 01:10:14.864372 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-29 01:10:14.864376 | orchestrator | Sunday 29 March 2026 01:07:18 +0000 (0:00:00.103) 0:06:09.928 ********** 2026-03-29 01:10:14.864379 | orchestrator | 2026-03-29 01:10:14.864383 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-29 01:10:14.864387 | orchestrator | Sunday 29 March 2026 01:07:18 +0000 (0:00:00.100) 0:06:10.029 ********** 2026-03-29 01:10:14.864391 | orchestrator | 2026-03-29 01:10:14.864394 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-29 01:10:14.864398 | orchestrator | Sunday 29 March 2026 01:07:19 +0000 (0:00:00.268) 0:06:10.297 ********** 2026-03-29 01:10:14.864402 | orchestrator | 2026-03-29 01:10:14.864409 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-29 01:10:14.864412 | orchestrator | Sunday 29 March 2026 01:07:19 +0000 (0:00:00.124) 0:06:10.421 ********** 2026-03-29 01:10:14.864416 | orchestrator | changed: [testbed-node-0] 2026-03-29 
01:10:14.864420 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:10:14.864424 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:10:14.864428 | orchestrator | 2026-03-29 01:10:14.864431 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-29 01:10:14.864435 | orchestrator | Sunday 29 March 2026 01:07:33 +0000 (0:00:13.977) 0:06:24.398 ********** 2026-03-29 01:10:14.864439 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:10:14.864443 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:10:14.864446 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:10:14.864450 | orchestrator | 2026-03-29 01:10:14.864456 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-29 01:10:14.864460 | orchestrator | Sunday 29 March 2026 01:07:49 +0000 (0:00:16.034) 0:06:40.433 ********** 2026-03-29 01:10:14.864464 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:10:14.864470 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:10:14.864491 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:10:14.864498 | orchestrator | 2026-03-29 01:10:14.864504 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-29 01:10:14.864508 | orchestrator | Sunday 29 March 2026 01:08:10 +0000 (0:00:21.307) 0:07:01.741 ********** 2026-03-29 01:10:14.864512 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:10:14.864515 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:10:14.864519 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:10:14.864523 | orchestrator | 2026-03-29 01:10:14.864527 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-29 01:10:14.864530 | orchestrator | Sunday 29 March 2026 01:08:36 +0000 (0:00:25.675) 0:07:27.416 ********** 2026-03-29 01:10:14.864534 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking 
libvirt container is ready (10 retries left). 2026-03-29 01:10:14.864538 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-03-29 01:10:14.864542 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-03-29 01:10:14.864546 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:10:14.864550 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:10:14.864554 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:10:14.864557 | orchestrator | 2026-03-29 01:10:14.864561 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-29 01:10:14.864565 | orchestrator | Sunday 29 March 2026 01:08:42 +0000 (0:00:06.090) 0:07:33.507 ********** 2026-03-29 01:10:14.864569 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:10:14.864573 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:10:14.864576 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:10:14.864580 | orchestrator | 2026-03-29 01:10:14.864584 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-29 01:10:14.864588 | orchestrator | Sunday 29 March 2026 01:08:43 +0000 (0:00:00.886) 0:07:34.393 ********** 2026-03-29 01:10:14.864591 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:10:14.864595 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:10:14.864599 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:10:14.864602 | orchestrator | 2026-03-29 01:10:14.864606 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-29 01:10:14.864610 | orchestrator | Sunday 29 March 2026 01:09:01 +0000 (0:00:18.428) 0:07:52.822 ********** 2026-03-29 01:10:14.864614 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.864618 | orchestrator | 2026-03-29 01:10:14.864621 | orchestrator | TASK [nova-cell : 
Waiting for nova-compute services to register themselves] **** 2026-03-29 01:10:14.864625 | orchestrator | Sunday 29 March 2026 01:09:01 +0000 (0:00:00.340) 0:07:53.162 ********** 2026-03-29 01:10:14.864629 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:10:14.864637 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:10:14.864640 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.864644 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.864648 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.864652 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-03-29 01:10:14.864656 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 01:10:14.864660 | orchestrator | 2026-03-29 01:10:14.864664 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-29 01:10:14.864667 | orchestrator | Sunday 29 March 2026 01:09:22 +0000 (0:00:20.387) 0:08:13.550 ********** 2026-03-29 01:10:14.864671 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:10:14.864675 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.864679 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:10:14.864682 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.864686 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.864690 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.864694 | orchestrator | 2026-03-29 01:10:14.864698 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-29 01:10:14.864701 | orchestrator | Sunday 29 March 2026 01:09:30 +0000 (0:00:08.485) 0:08:22.035 ********** 2026-03-29 01:10:14.864705 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:10:14.864709 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:10:14.864713 | orchestrator | 
skipping: [testbed-node-0] 2026-03-29 01:10:14.864716 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.864720 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.864724 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-03-29 01:10:14.864727 | orchestrator | 2026-03-29 01:10:14.864731 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-29 01:10:14.864735 | orchestrator | Sunday 29 March 2026 01:09:34 +0000 (0:00:03.645) 0:08:25.681 ********** 2026-03-29 01:10:14.864739 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 01:10:14.864742 | orchestrator | 2026-03-29 01:10:14.864746 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-29 01:10:14.864750 | orchestrator | Sunday 29 March 2026 01:09:49 +0000 (0:00:14.726) 0:08:40.408 ********** 2026-03-29 01:10:14.864754 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 01:10:14.864757 | orchestrator | 2026-03-29 01:10:14.864761 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-29 01:10:14.864765 | orchestrator | Sunday 29 March 2026 01:09:50 +0000 (0:00:01.442) 0:08:41.850 ********** 2026-03-29 01:10:14.864769 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.864773 | orchestrator | 2026-03-29 01:10:14.864776 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-29 01:10:14.864780 | orchestrator | Sunday 29 March 2026 01:09:52 +0000 (0:00:01.388) 0:08:43.239 ********** 2026-03-29 01:10:14.864786 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 01:10:14.864790 | orchestrator | 2026-03-29 01:10:14.864794 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-29 01:10:14.864800 
| orchestrator | Sunday 29 March 2026 01:10:05 +0000 (0:00:13.437) 0:08:56.677 ********** 2026-03-29 01:10:14.864804 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:10:14.864808 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:10:14.864812 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:10:14.864815 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:10:14.864819 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:10:14.864823 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:10:14.864826 | orchestrator | 2026-03-29 01:10:14.864830 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-29 01:10:14.864834 | orchestrator | 2026-03-29 01:10:14.864838 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-29 01:10:14.864845 | orchestrator | Sunday 29 March 2026 01:10:07 +0000 (0:00:01.917) 0:08:58.595 ********** 2026-03-29 01:10:14.864849 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:10:14.864853 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:10:14.864857 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:10:14.864861 | orchestrator | 2026-03-29 01:10:14.864864 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-29 01:10:14.864868 | orchestrator | 2026-03-29 01:10:14.864872 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-29 01:10:14.864876 | orchestrator | Sunday 29 March 2026 01:10:08 +0000 (0:00:01.142) 0:08:59.738 ********** 2026-03-29 01:10:14.864879 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.864883 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.864887 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.864890 | orchestrator | 2026-03-29 01:10:14.864894 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-29 
01:10:14.864898 | orchestrator | 2026-03-29 01:10:14.864902 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-29 01:10:14.864905 | orchestrator | Sunday 29 March 2026 01:10:09 +0000 (0:00:00.515) 0:09:00.254 ********** 2026-03-29 01:10:14.864909 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-29 01:10:14.864913 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-29 01:10:14.864917 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-29 01:10:14.864921 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-29 01:10:14.864924 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-29 01:10:14.864928 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-29 01:10:14.864932 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:10:14.864936 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-29 01:10:14.864940 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-29 01:10:14.864946 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-29 01:10:14.864952 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-29 01:10:14.864958 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-29 01:10:14.864964 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-29 01:10:14.864970 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:10:14.864976 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-29 01:10:14.864982 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-29 01:10:14.864987 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-29 01:10:14.864993 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  
2026-03-29 01:10:14.864999 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-29 01:10:14.865004 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-29 01:10:14.865009 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:10:14.865015 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-29 01:10:14.865021 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-29 01:10:14.865027 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-29 01:10:14.865033 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-29 01:10:14.865039 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-29 01:10:14.865045 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-29 01:10:14.865051 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.865057 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-29 01:10:14.865063 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-29 01:10:14.865072 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-29 01:10:14.865078 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-29 01:10:14.865084 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-29 01:10:14.865090 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-29 01:10:14.865098 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.865102 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-29 01:10:14.865106 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-29 01:10:14.865109 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-29 01:10:14.865113 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  
2026-03-29 01:10:14.865117 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-29 01:10:14.865120 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-29 01:10:14.865124 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.865128 | orchestrator | 2026-03-29 01:10:14.865132 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-29 01:10:14.865136 | orchestrator | 2026-03-29 01:10:14.865142 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-29 01:10:14.865146 | orchestrator | Sunday 29 March 2026 01:10:10 +0000 (0:00:01.382) 0:09:01.636 ********** 2026-03-29 01:10:14.865150 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-29 01:10:14.865158 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-29 01:10:14.865162 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.865166 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-29 01:10:14.865169 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-29 01:10:14.865173 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.865177 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-29 01:10:14.865180 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-29 01:10:14.865184 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.865188 | orchestrator | 2026-03-29 01:10:14.865192 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-29 01:10:14.865196 | orchestrator | 2026-03-29 01:10:14.865200 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-29 01:10:14.865203 | orchestrator | Sunday 29 March 2026 01:10:11 +0000 (0:00:00.723) 0:09:02.360 ********** 2026-03-29 01:10:14.865207 | 
orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.865211 | orchestrator | 2026-03-29 01:10:14.865215 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-29 01:10:14.865218 | orchestrator | 2026-03-29 01:10:14.865222 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-03-29 01:10:14.865226 | orchestrator | Sunday 29 March 2026 01:10:11 +0000 (0:00:00.657) 0:09:03.017 ********** 2026-03-29 01:10:14.865229 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:10:14.865233 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:10:14.865237 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:10:14.865241 | orchestrator | 2026-03-29 01:10:14.865244 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:10:14.865248 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:10:14.865253 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-03-29 01:10:14.865258 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-03-29 01:10:14.865262 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-03-29 01:10:14.865269 | orchestrator | testbed-node-3 : ok=46  changed=28  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-29 01:10:14.865273 | orchestrator | testbed-node-4 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-29 01:10:14.865276 | orchestrator | testbed-node-5 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-29 01:10:14.865280 | orchestrator | 2026-03-29 01:10:14.865284 | orchestrator | 2026-03-29 01:10:14.865288 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-29 01:10:14.865291 | orchestrator | Sunday 29 March 2026 01:10:12 +0000 (0:00:00.615) 0:09:03.633 ********** 2026-03-29 01:10:14.865295 | orchestrator | =============================================================================== 2026-03-29 01:10:14.865299 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.24s 2026-03-29 01:10:14.865302 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 25.68s 2026-03-29 01:10:14.865306 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.17s 2026-03-29 01:10:14.865310 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.31s 2026-03-29 01:10:14.865313 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 20.39s 2026-03-29 01:10:14.865317 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 20.04s 2026-03-29 01:10:14.865321 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.66s 2026-03-29 01:10:14.865324 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 18.43s 2026-03-29 01:10:14.865328 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.38s 2026-03-29 01:10:14.865332 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.03s 2026-03-29 01:10:14.865335 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.73s 2026-03-29 01:10:14.865339 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.60s 2026-03-29 01:10:14.865343 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 13.98s 2026-03-29 01:10:14.865346 | orchestrator | nova-cell : Create cell 
------------------------------------------------ 13.78s 2026-03-29 01:10:14.865350 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.44s 2026-03-29 01:10:14.865354 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.31s 2026-03-29 01:10:14.865357 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.56s 2026-03-29 01:10:14.865364 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.93s 2026-03-29 01:10:14.865368 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.49s 2026-03-29 01:10:14.865371 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.38s 2026-03-29 01:10:17.906091 | orchestrator | 2026-03-29 01:10:17 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:10:17.906177 | orchestrator | 2026-03-29 01:10:17 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:10:20.960299 | orchestrator | 2026-03-29 01:10:20 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:10:20.960349 | orchestrator | 2026-03-29 01:10:20 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:10:24.005909 | orchestrator | 2026-03-29 01:10:24 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:10:24.005970 | orchestrator | 2026-03-29 01:10:24 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:10:27.056007 | orchestrator | 2026-03-29 01:10:27 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:10:27.056101 | orchestrator | 2026-03-29 01:10:27 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:10:30.103983 | orchestrator | 2026-03-29 01:10:30 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:10:30.104056 | orchestrator | 2026-03-29 01:10:30 | INFO  
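The PLAY RECAP host lines above follow a fixed `host : key=value` layout, so the per-host counters can be pulled out mechanically. A minimal sketch (the regex is illustrative, not Ansible's own output-parsing code):

```python
import re

# Parse one Ansible PLAY RECAP host line (format as printed in the recap
# above) into a dict of counters. Plain sketch, not Ansible internals.
RECAP_RE = re.compile(r"(\w+=\d+)")

def parse_recap(line: str) -> dict:
    host = line.split(":", 1)[0].strip()
    counters = {
        key: int(value)
        for key, value in (pair.split("=") for pair in RECAP_RE.findall(line))
    }
    return {"host": host, **counters}

recap = parse_recap(
    "testbed-node-0 : ok=54  changed=35  unreachable=0 "
    "failed=0 skipped=46  rescued=0 ignored=0"
)
print(recap["failed"])  # 0
```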
| Wait 1 second(s) until the next check 2026-03-29 01:12:31.959297 | orchestrator | 2026-03-29 01:12:31 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:12:31.959343 | orchestrator | 2026-03-29 01:12:31 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:12:35.003577
| orchestrator | 2026-03-29 01:12:35 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:12:35.003640 | orchestrator | 2026-03-29 01:12:35 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:12:38.051582 | orchestrator | 2026-03-29 01:12:38 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:12:38.051689 | orchestrator | 2026-03-29 01:12:38 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:12:41.095464 | orchestrator | 2026-03-29 01:12:41 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:12:41.095535 | orchestrator | 2026-03-29 01:12:41 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:12:44.145137 | orchestrator | 2026-03-29 01:12:44 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state STARTED 2026-03-29 01:12:44.145351 | orchestrator | 2026-03-29 01:12:44 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:12:47.195788 | orchestrator | 2026-03-29 01:12:47 | INFO  | Task eaf45d79-acac-464b-9378-52c460a1ae78 is in state SUCCESS 2026-03-29 01:12:47.197410 | orchestrator | 2026-03-29 01:12:47.197461 | orchestrator | 2026-03-29 01:12:47.197469 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:12:47.197476 | orchestrator | 2026-03-29 01:12:47.197482 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:12:47.197487 | orchestrator | Sunday 29 March 2026 01:08:07 +0000 (0:00:00.323) 0:00:00.323 ********** 2026-03-29 01:12:47.197492 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:12:47.197499 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:12:47.197504 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:12:47.197509 | orchestrator | 2026-03-29 01:12:47.197513 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:12:47.197519 | 
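The STARTED-then-wait loop above (the deployer polling a task until it reports SUCCESS) can be sketched as follows; this is a minimal illustration, and `fetch_state` is a hypothetical stand-in for whatever client call queries the task state, not the actual OSISM implementation:

```python
import itertools
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_task(fetch_state, delay=1.0, max_checks=3600):
    """Poll fetch_state() until the task reaches a terminal state.

    fetch_state: zero-argument callable returning the current state string.
    Returns the terminal state; raises TimeoutError after max_checks polls.
    """
    for _ in range(max_checks):
        state = fetch_state()
        if state in TERMINAL_STATES:
            return state
        # Mirrors the two log lines emitted on every non-terminal poll.
        print(f"Task is in state {state}")
        print(f"Wait {delay:g} second(s) until the next check")
        time.sleep(delay)
    raise TimeoutError("task did not reach a terminal state")

# Simulated task that reports STARTED twice, then SUCCESS.
states = itertools.chain(["STARTED", "STARTED"], itertools.repeat("SUCCESS"))
result = wait_for_task(lambda: next(states), delay=0)
```

The roughly three-second gap between consecutive polls in the log (despite "Wait 1 second(s)") suggests each check itself takes a couple of seconds on top of the sleep.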
2026-03-29 01:12:47.197519 | orchestrator | Sunday 29 March 2026 01:08:07 +0000 (0:00:00.425) 0:00:00.749 **********
2026-03-29 01:12:47.197524 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-03-29 01:12:47.197530 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-03-29 01:12:47.197535 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-03-29 01:12:47.197540 | orchestrator |
2026-03-29 01:12:47.197545 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-03-29 01:12:47.197550 | orchestrator |
2026-03-29 01:12:47.197556 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-29 01:12:47.197561 | orchestrator | Sunday 29 March 2026 01:08:07 +0000 (0:00:00.340) 0:00:01.089 **********
2026-03-29 01:12:47.197566 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:12:47.197572 | orchestrator |
2026-03-29 01:12:47.197577 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-03-29 01:12:47.197583 | orchestrator | Sunday 29 March 2026 01:08:08 +0000 (0:00:00.928) 0:00:02.018 **********
2026-03-29 01:12:47.197588 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-03-29 01:12:47.197593 | orchestrator |
2026-03-29 01:12:47.197599 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-03-29 01:12:47.197612 | orchestrator | Sunday 29 March 2026 01:08:12 +0000 (0:00:03.438) 0:00:05.457 **********
2026-03-29 01:12:47.197617 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-03-29 01:12:47.197622 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-03-29 01:12:47.197627 | orchestrator |
2026-03-29 01:12:47.197632 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-03-29 01:12:47.197636 | orchestrator | Sunday 29 March 2026 01:08:18 +0000 (0:00:05.959) 0:00:11.416 **********
2026-03-29 01:12:47.197642 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-29 01:12:47.197647 | orchestrator |
2026-03-29 01:12:47.197652 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-03-29 01:12:47.197657 | orchestrator | Sunday 29 March 2026 01:08:21 +0000 (0:00:03.564) 0:00:14.980 **********
2026-03-29 01:12:47.197663 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-03-29 01:12:47.197668 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-03-29 01:12:47.197673 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-29 01:12:47.197678 | orchestrator |
2026-03-29 01:12:47.197682 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-03-29 01:12:47.197685 | orchestrator | Sunday 29 March 2026 01:08:29 +0000 (0:00:07.257) 0:00:22.238 **********
2026-03-29 01:12:47.197688 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-29 01:12:47.197704 | orchestrator |
2026-03-29 01:12:47.197707 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-03-29 01:12:47.197710 | orchestrator | Sunday 29 March 2026 01:08:32 +0000 (0:00:03.249) 0:00:25.488 **********
2026-03-29 01:12:47.197713 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-03-29 01:12:47.197716 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-03-29 01:12:47.197719 | orchestrator |
2026-03-29 01:12:47.197722 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-03-29 01:12:47.197725 | orchestrator | Sunday 29 March 2026 01:08:39 +0000 (0:00:06.676) 0:00:32.164 **********
2026-03-29 01:12:47.197728 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-03-29 01:12:47.197731 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-03-29 01:12:47.197734 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-03-29 01:12:47.197737 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-03-29 01:12:47.197740 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-03-29 01:12:47.197743 | orchestrator |
2026-03-29 01:12:47.197746 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-29 01:12:47.197749 | orchestrator | Sunday 29 March 2026 01:08:54 +0000 (0:00:15.657) 0:00:47.822 **********
2026-03-29 01:12:47.197752 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:12:47.197756 | orchestrator |
2026-03-29 01:12:47.197759 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-03-29 01:12:47.197762 | orchestrator | Sunday 29 March 2026 01:08:55 +0000 (0:00:00.716) 0:00:48.538 **********
2026-03-29 01:12:47.197765 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:12:47.197768 | orchestrator |
2026-03-29 01:12:47.197771 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-03-29 01:12:47.197774 | orchestrator | Sunday 29 March 2026 01:09:00 +0000 (0:00:05.127) 0:00:53.666 **********
2026-03-29 01:12:47.197777 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:12:47.197780 | orchestrator |
2026-03-29 01:12:47.197783 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-03-29 01:12:47.197795 | orchestrator | Sunday 29 March 2026 01:09:04 +0000 (0:00:03.868) 0:00:57.534 **********
2026-03-29 01:12:47.197798 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:12:47.197802 | orchestrator |
2026-03-29 01:12:47.197805 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-03-29 01:12:47.197808 | orchestrator | Sunday 29 March 2026 01:09:07 +0000 (0:00:03.138) 0:01:00.672 **********
2026-03-29 01:12:47.197811 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-03-29 01:12:47.197814 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-03-29 01:12:47.197817 | orchestrator |
2026-03-29 01:12:47.197820 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-03-29 01:12:47.197823 | orchestrator | Sunday 29 March 2026 01:09:16 +0000 (0:00:08.743) 0:01:09.416 **********
2026-03-29 01:12:47.197826 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-03-29 01:12:47.197830 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-03-29 01:12:47.197834 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-03-29 01:12:47.197838 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-03-29 01:12:47.197841 | orchestrator |
2026-03-29 01:12:47.197844 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-03-29 01:12:47.197850 | orchestrator | Sunday 29 March 2026 01:09:33 +0000 (0:00:17.018) 0:01:26.434 **********
2026-03-29 01:12:47.197853 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:12:47.197856 | orchestrator |
2026-03-29 01:12:47.197859 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-03-29 01:12:47.197868 | orchestrator | Sunday 29 March 2026 01:09:37 +0000 (0:00:04.332) 0:01:30.766 **********
2026-03-29 01:12:47.197871 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:12:47.197874 | orchestrator |
2026-03-29 01:12:47.197877 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-03-29 01:12:47.197880 | orchestrator | Sunday 29 March 2026 01:09:44 +0000 (0:00:06.516) 0:01:37.282 **********
2026-03-29 01:12:47.197883 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:12:47.197887 | orchestrator |
2026-03-29 01:12:47.197890 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-03-29 01:12:47.197893 | orchestrator | Sunday 29 March 2026 01:09:44 +0000 (0:00:00.465) 0:01:37.748 **********
2026-03-29 01:12:47.197896 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:12:47.197899 | orchestrator |
2026-03-29 01:12:47.197902 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-29 01:12:47.197905 | orchestrator | Sunday 29 March 2026 01:09:48 +0000 (0:00:03.972) 0:01:41.721 **********
2026-03-29 01:12:47.197908 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:12:47.197911 | orchestrator |
2026-03-29 01:12:47.197914 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-03-29 01:12:47.197917 | orchestrator | Sunday 29 March 2026 01:09:49 +0000 (0:00:00.811) 0:01:42.532 **********
2026-03-29 01:12:47.197920 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:12:47.197924 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:12:47.197927 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:12:47.197930 | orchestrator |
2026-03-29 01:12:47.197933 | orchestrator | TASK [octavia :
Update Octavia health manager port host_id] ******************** 2026-03-29 01:12:47.197936 | orchestrator | Sunday 29 March 2026 01:09:55 +0000 (0:00:06.384) 0:01:48.917 ********** 2026-03-29 01:12:47.197939 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:12:47.197942 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:12:47.197945 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:12:47.197948 | orchestrator | 2026-03-29 01:12:47.197951 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-29 01:12:47.197954 | orchestrator | Sunday 29 March 2026 01:10:00 +0000 (0:00:04.734) 0:01:53.651 ********** 2026-03-29 01:12:47.197957 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:12:47.197960 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:12:47.197963 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:12:47.197966 | orchestrator | 2026-03-29 01:12:47.197969 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-29 01:12:47.197972 | orchestrator | Sunday 29 March 2026 01:10:01 +0000 (0:00:00.768) 0:01:54.420 ********** 2026-03-29 01:12:47.197975 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:12:47.197979 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:12:47.197982 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:12:47.197985 | orchestrator | 2026-03-29 01:12:47.197988 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-29 01:12:47.197991 | orchestrator | Sunday 29 March 2026 01:10:03 +0000 (0:00:01.748) 0:01:56.169 ********** 2026-03-29 01:12:47.197994 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:12:47.197997 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:12:47.198000 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:12:47.198003 | orchestrator | 2026-03-29 01:12:47.198006 | orchestrator | TASK [octavia : Create octavia-interface 
service] ****************************** 2026-03-29 01:12:47.198010 | orchestrator | Sunday 29 March 2026 01:10:04 +0000 (0:00:01.157) 0:01:57.326 ********** 2026-03-29 01:12:47.198041 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:12:47.198045 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:12:47.198052 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:12:47.198055 | orchestrator | 2026-03-29 01:12:47.198058 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-29 01:12:47.198062 | orchestrator | Sunday 29 March 2026 01:10:05 +0000 (0:00:01.064) 0:01:58.391 ********** 2026-03-29 01:12:47.198065 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:12:47.198069 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:12:47.198072 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:12:47.198076 | orchestrator | 2026-03-29 01:12:47.198083 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-29 01:12:47.198086 | orchestrator | Sunday 29 March 2026 01:10:07 +0000 (0:00:02.208) 0:02:00.599 ********** 2026-03-29 01:12:47.198090 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:12:47.198093 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:12:47.198097 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:12:47.198100 | orchestrator | 2026-03-29 01:12:47.198104 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-29 01:12:47.198107 | orchestrator | Sunday 29 March 2026 01:10:08 +0000 (0:00:01.474) 0:02:02.074 ********** 2026-03-29 01:12:47.198110 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:12:47.198114 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:12:47.198118 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:12:47.198121 | orchestrator | 2026-03-29 01:12:47.198124 | orchestrator | TASK [octavia : Gather facts] 
************************************************** 2026-03-29 01:12:47.198128 | orchestrator | Sunday 29 March 2026 01:10:09 +0000 (0:00:00.616) 0:02:02.690 ********** 2026-03-29 01:12:47.198131 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:12:47.198135 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:12:47.198138 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:12:47.198141 | orchestrator | 2026-03-29 01:12:47.198145 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-29 01:12:47.198149 | orchestrator | Sunday 29 March 2026 01:10:13 +0000 (0:00:03.504) 0:02:06.194 ********** 2026-03-29 01:12:47.198152 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:12:47.198156 | orchestrator | 2026-03-29 01:12:47.198159 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-29 01:12:47.198163 | orchestrator | Sunday 29 March 2026 01:10:13 +0000 (0:00:00.722) 0:02:06.916 ********** 2026-03-29 01:12:47.198166 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:12:47.198170 | orchestrator | 2026-03-29 01:12:47.198173 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-29 01:12:47.198179 | orchestrator | Sunday 29 March 2026 01:10:19 +0000 (0:00:05.246) 0:02:12.163 ********** 2026-03-29 01:12:47.198183 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:12:47.198186 | orchestrator | 2026-03-29 01:12:47.198416 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-29 01:12:47.198420 | orchestrator | Sunday 29 March 2026 01:10:22 +0000 (0:00:03.543) 0:02:15.707 ********** 2026-03-29 01:12:47.198424 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-29 01:12:47.198428 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-29 
01:12:47.198431 | orchestrator | 2026-03-29 01:12:47.198434 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-29 01:12:47.198437 | orchestrator | Sunday 29 March 2026 01:10:29 +0000 (0:00:06.713) 0:02:22.421 ********** 2026-03-29 01:12:47.198440 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:12:47.198443 | orchestrator | 2026-03-29 01:12:47.198446 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-29 01:12:47.198449 | orchestrator | Sunday 29 March 2026 01:10:33 +0000 (0:00:04.082) 0:02:26.503 ********** 2026-03-29 01:12:47.198452 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:12:47.198455 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:12:47.198458 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:12:47.198461 | orchestrator | 2026-03-29 01:12:47.198468 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-29 01:12:47.198471 | orchestrator | Sunday 29 March 2026 01:10:33 +0000 (0:00:00.319) 0:02:26.822 ********** 2026-03-29 01:12:47.198476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:12:47.198485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:12:47.198491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}}}}) 2026-03-29 01:12:47.198500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:12:47.198506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:12:47.198516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:12:47.198522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.198528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.198537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.198543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.198552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.198557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.198565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:12:47.198573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:12:47.198579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:12:47.198584 | orchestrator | 2026-03-29 01:12:47.198590 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-29 01:12:47.198595 | orchestrator | Sunday 29 March 2026 01:10:36 +0000 (0:00:02.629) 0:02:29.451 ********** 2026-03-29 01:12:47.198601 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:12:47.198606 | orchestrator | 2026-03-29 01:12:47.198613 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-29 01:12:47.198618 | orchestrator | Sunday 29 March 2026 01:10:36 +0000 (0:00:00.145) 
0:02:29.596 ********** 2026-03-29 01:12:47.198624 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:12:47.198629 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:12:47.198634 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:12:47.198640 | orchestrator | 2026-03-29 01:12:47.198645 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-29 01:12:47.198650 | orchestrator | Sunday 29 March 2026 01:10:36 +0000 (0:00:00.288) 0:02:29.885 ********** 2026-03-29 01:12:47.198656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 01:12:47.198834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 01:12:47.198845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 01:12:47.198851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 01:12:47.198856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 
01:12:47.198861 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:12:47.198883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 01:12:47.198889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 01:12:47.198896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 01:12:47.198905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 01:12:47.198910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:12:47.198915 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:12:47.198920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 01:12:47.198938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 01:12:47.198943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 01:12:47.198954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-29 01:12:47.198959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-29 01:12:47.198964 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:12:47.198969 | orchestrator |
2026-03-29 01:12:47.198974 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-29 01:12:47.198979 | orchestrator | Sunday 29 March 2026 01:10:37 +0000 (0:00:00.696) 0:02:30.581 **********
2026-03-29 01:12:47.198984 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:12:47.198989 | orchestrator |
2026-03-29 01:12:47.198993 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2026-03-29 01:12:47.198998 | orchestrator | Sunday 29 March 2026 01:10:38 +0000 (0:00:00.708) 0:02:31.290 **********
2026-03-29 01:12:47.199003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api',
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:12:47.199020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:12:47.199026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:12:47.199038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:12:47.199044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:12:47.199049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:12:47.199054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-29 01:12:47.199125 | orchestrator |
2026-03-29 01:12:47.199130 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2026-03-29 01:12:47.199139 | orchestrator | Sunday 29 March 2026 01:10:43 +0000 (0:00:05.600) 0:02:36.890 **********
2026-03-29 01:12:47.199142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-29 01:12:47.199148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions':
{}}})  2026-03-29 01:12:47.199151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 01:12:47.199154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 01:12:47.199158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:12:47.199161 | orchestrator | skipping: [testbed-node-0] 
2026-03-29 01:12:47.199167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 01:12:47.199173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 01:12:47.199178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 01:12:47.199181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 01:12:47.199184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:12:47.199188 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:12:47.199191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 01:12:47.199194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 01:12:47.199246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 01:12:47.199251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-29 01:12:47.199256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-29 01:12:47.199259 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:12:47.199262 | orchestrator |
2026-03-29 01:12:47.199265 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2026-03-29 01:12:47.199268 | orchestrator | Sunday 29 March 2026 01:10:44 +0000 (0:00:00.634) 0:02:37.525 **********
2026-03-29 01:12:47.199272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api':
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 01:12:47.199275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 01:12:47.199278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 01:12:47.199286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 01:12:47.199290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:12:47.199293 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:12:47.199298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 01:12:47.199301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 01:12:47.199305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 01:12:47.199308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 01:12:47.199316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:12:47.199319 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:12:47.199322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 01:12:47.199327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 01:12:47.199332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 01:12:47.199338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 01:12:47.199354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:12:47.199360 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:12:47.199365 | orchestrator | 2026-03-29 01:12:47.199370 | orchestrator | TASK [octavia : Copying over config.json files for services] 
******************* 2026-03-29 01:12:47.199375 | orchestrator | Sunday 29 March 2026 01:10:45 +0000 (0:00:01.113) 0:02:38.639 ********** 2026-03-29 01:12:47.199381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:12:47.199386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:12:47.199390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:12:47.199393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:12:47.199400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:12:47.199403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:12:47.199408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 
01:12:47.199417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199429 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199444 | orchestrator | 2026-03-29 01:12:47.199450 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-29 01:12:47.199454 | orchestrator | Sunday 29 March 2026 01:10:49 +0000 (0:00:04.432) 0:02:43.072 ********** 2026-03-29 01:12:47.199457 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-29 01:12:47.199462 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-29 01:12:47.199465 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-29 01:12:47.199469 | orchestrator | 2026-03-29 01:12:47.199472 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-29 01:12:47.199476 | orchestrator | Sunday 29 March 2026 01:10:51 +0000 (0:00:01.666) 0:02:44.738 ********** 2026-03-29 01:12:47.199480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:12:47.199488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:12:47.199496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:12:47.199502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:12:47.199509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:12:47.199515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:12:47.199525 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 
'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199572 | orchestrator | 2026-03-29 01:12:47.199575 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-29 01:12:47.199579 | orchestrator | Sunday 29 March 2026 01:11:07 +0000 (0:00:16.144) 0:03:00.882 ********** 2026-03-29 01:12:47.199583 | orchestrator | 
changed: [testbed-node-0] 2026-03-29 01:12:47.199586 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:12:47.199590 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:12:47.199593 | orchestrator | 2026-03-29 01:12:47.199597 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-29 01:12:47.199600 | orchestrator | Sunday 29 March 2026 01:11:09 +0000 (0:00:02.057) 0:03:02.940 ********** 2026-03-29 01:12:47.199604 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-29 01:12:47.199608 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-29 01:12:47.199613 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-29 01:12:47.199617 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-29 01:12:47.199621 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-29 01:12:47.199624 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-29 01:12:47.199628 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-29 01:12:47.199631 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-29 01:12:47.199635 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-29 01:12:47.199638 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-29 01:12:47.199642 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-29 01:12:47.199645 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-29 01:12:47.199649 | orchestrator | 2026-03-29 01:12:47.199652 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-29 01:12:47.199656 | orchestrator | Sunday 29 March 2026 01:11:14 +0000 (0:00:05.049) 0:03:07.989 ********** 2026-03-29 01:12:47.199660 | orchestrator | changed: 
[testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-29 01:12:47.199663 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-29 01:12:47.199669 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-29 01:12:47.199674 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-29 01:12:47.199682 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-29 01:12:47.199687 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-29 01:12:47.199691 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-29 01:12:47.199696 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-29 01:12:47.199703 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-29 01:12:47.199708 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-29 01:12:47.199713 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-29 01:12:47.199718 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-29 01:12:47.199724 | orchestrator | 2026-03-29 01:12:47.199729 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-29 01:12:47.199735 | orchestrator | Sunday 29 March 2026 01:11:20 +0000 (0:00:05.259) 0:03:13.248 ********** 2026-03-29 01:12:47.199741 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-29 01:12:47.199747 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-29 01:12:47.199752 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-29 01:12:47.199757 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-29 01:12:47.199761 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-29 01:12:47.199764 | orchestrator | changed: 
[testbed-node-2] => (item=client_ca.cert.pem) 2026-03-29 01:12:47.199768 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-29 01:12:47.199772 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-29 01:12:47.199775 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-29 01:12:47.199780 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-29 01:12:47.199785 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-29 01:12:47.199790 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-29 01:12:47.199795 | orchestrator | 2026-03-29 01:12:47.199800 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-29 01:12:47.199805 | orchestrator | Sunday 29 March 2026 01:11:24 +0000 (0:00:04.813) 0:03:18.062 ********** 2026-03-29 01:12:47.199811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:12:47.199821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:12:47.199834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:12:47.199840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:12:47.199846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:12:47.199852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:12:47.199858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199894 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:12:47.199907 | orchestrator | 2026-03-29 01:12:47.199910 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-29 01:12:47.199913 | orchestrator | Sunday 29 March 2026 01:11:28 +0000 (0:00:03.603) 0:03:21.666 ********** 2026-03-29 01:12:47.199916 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:12:47.199920 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:12:47.199923 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:12:47.199926 | orchestrator | 2026-03-29 01:12:47.199929 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-29 01:12:47.199932 | orchestrator | Sunday 29 March 2026 01:11:29 +0000 (0:00:00.479) 0:03:22.146 ********** 2026-03-29 01:12:47.199935 | 
orchestrator | changed: [testbed-node-0] 2026-03-29 01:12:47.199938 | orchestrator | 2026-03-29 01:12:47.199941 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-29 01:12:47.199944 | orchestrator | Sunday 29 March 2026 01:11:31 +0000 (0:00:02.720) 0:03:24.867 ********** 2026-03-29 01:12:47.199947 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:12:47.199951 | orchestrator | 2026-03-29 01:12:47.199954 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-29 01:12:47.199957 | orchestrator | Sunday 29 March 2026 01:11:34 +0000 (0:00:02.784) 0:03:27.651 ********** 2026-03-29 01:12:47.199960 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:12:47.199963 | orchestrator | 2026-03-29 01:12:47.199966 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-29 01:12:47.199969 | orchestrator | Sunday 29 March 2026 01:11:36 +0000 (0:00:02.489) 0:03:30.140 ********** 2026-03-29 01:12:47.199972 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:12:47.199976 | orchestrator | 2026-03-29 01:12:47.199979 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-03-29 01:12:47.199983 | orchestrator | Sunday 29 March 2026 01:11:39 +0000 (0:00:02.034) 0:03:32.175 ********** 2026-03-29 01:12:47.199986 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:12:47.199989 | orchestrator | 2026-03-29 01:12:47.199993 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-29 01:12:47.199996 | orchestrator | Sunday 29 March 2026 01:12:00 +0000 (0:00:21.228) 0:03:53.404 ********** 2026-03-29 01:12:47.199999 | orchestrator | 2026-03-29 01:12:47.200002 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-29 01:12:47.200005 | orchestrator | Sunday 29 March 2026 
01:12:00 +0000 (0:00:00.071) 0:03:53.476 ********** 2026-03-29 01:12:47.200008 | orchestrator | 2026-03-29 01:12:47.200011 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-29 01:12:47.200014 | orchestrator | Sunday 29 March 2026 01:12:00 +0000 (0:00:00.069) 0:03:53.546 ********** 2026-03-29 01:12:47.200017 | orchestrator | 2026-03-29 01:12:47.200020 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-29 01:12:47.200023 | orchestrator | Sunday 29 March 2026 01:12:00 +0000 (0:00:00.072) 0:03:53.618 ********** 2026-03-29 01:12:47.200026 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:12:47.200030 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:12:47.200033 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:12:47.200036 | orchestrator | 2026-03-29 01:12:47.200039 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-29 01:12:47.200042 | orchestrator | Sunday 29 March 2026 01:12:15 +0000 (0:00:15.295) 0:04:08.914 ********** 2026-03-29 01:12:47.200045 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:12:47.200048 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:12:47.200051 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:12:47.200054 | orchestrator | 2026-03-29 01:12:47.200057 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-29 01:12:47.200063 | orchestrator | Sunday 29 March 2026 01:12:21 +0000 (0:00:06.172) 0:04:15.086 ********** 2026-03-29 01:12:47.200066 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:12:47.200070 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:12:47.200073 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:12:47.200076 | orchestrator | 2026-03-29 01:12:47.200079 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 
2026-03-29 01:12:47.200082 | orchestrator | Sunday 29 March 2026 01:12:30 +0000 (0:00:08.471) 0:04:23.558 ********** 2026-03-29 01:12:47.200085 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:12:47.200088 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:12:47.200091 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:12:47.200095 | orchestrator | 2026-03-29 01:12:47.200098 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-29 01:12:47.200101 | orchestrator | Sunday 29 March 2026 01:12:40 +0000 (0:00:09.910) 0:04:33.468 ********** 2026-03-29 01:12:47.200104 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:12:47.200107 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:12:47.200110 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:12:47.200113 | orchestrator | 2026-03-29 01:12:47.200116 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:12:47.200120 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 01:12:47.200123 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-29 01:12:47.200126 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-29 01:12:47.200129 | orchestrator | 2026-03-29 01:12:47.200132 | orchestrator | 2026-03-29 01:12:47.200136 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:12:47.200139 | orchestrator | Sunday 29 March 2026 01:12:46 +0000 (0:00:06.330) 0:04:39.799 ********** 2026-03-29 01:12:47.200144 | orchestrator | =============================================================================== 2026-03-29 01:12:47.200147 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.23s 2026-03-29 01:12:47.200150 
| orchestrator | octavia : Add rules for security groups -------------------------------- 17.02s 2026-03-29 01:12:47.200154 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.14s 2026-03-29 01:12:47.200157 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.66s 2026-03-29 01:12:47.200160 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.30s 2026-03-29 01:12:47.200163 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 9.91s 2026-03-29 01:12:47.200166 | orchestrator | octavia : Create security groups for octavia ---------------------------- 8.74s 2026-03-29 01:12:47.200169 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 8.47s 2026-03-29 01:12:47.200172 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.26s 2026-03-29 01:12:47.200175 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.71s 2026-03-29 01:12:47.200179 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.68s 2026-03-29 01:12:47.200182 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.52s 2026-03-29 01:12:47.200185 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.38s 2026-03-29 01:12:47.200188 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 6.33s 2026-03-29 01:12:47.200191 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.17s 2026-03-29 01:12:47.200194 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 5.96s 2026-03-29 01:12:47.200224 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.60s 2026-03-29 01:12:47.200230 | orchestrator 
| octavia : Copying certificate files for octavia-housekeeping ------------ 5.26s 2026-03-29 01:15:01.233131 | orchestrator | octavia : Get amphora flavor info --------------------------------------- 5.25s 2026-03-29 01:12:47.200236 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.13s 2026-03-29 01:12:47.200239 | orchestrator | 2026-03-29 01:12:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:13:48.085795 | orchestrator | 2026-03-29 01:13:48.290910 | orchestrator | 2026-03-29 01:13:48.296152 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sun Mar 29 01:13:48 UTC 2026 2026-03-29 01:13:48.296211 | orchestrator | 2026-03-29 01:13:48.677489 | orchestrator | ok: Runtime: 0:32:02.043286 2026-03-29 01:13:48.934439 | 2026-03-29 01:13:48.934584 | TASK [Bootstrap services] 2026-03-29 01:13:49.605672 | orchestrator | 2026-03-29 01:13:49.605752 | orchestrator | # BOOTSTRAP 2026-03-29 01:13:49.605759 | orchestrator | 2026-03-29 01:13:49.605763 | orchestrator | + set -e 2026-03-29 01:13:49.605772 | orchestrator | + echo 2026-03-29 01:13:49.605780 | orchestrator | + echo '# BOOTSTRAP' 2026-03-29 01:13:49.605786 | orchestrator | + echo 2026-03-29 01:13:49.605801 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-29 01:13:49.614985 | orchestrator | + set -e 2026-03-29 01:13:49.615031 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-29 01:13:54.596367 | orchestrator | 2026-03-29 01:13:54 | INFO  | It takes a moment until task 962442ef-4899-47e1-a067-cffc747916b2 (flavor-manager) has been started and output is visible here. 
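The bootstrap trace above shows a wrapper that chains shell scripts (`bootstrap-services.sh` invoking `bootstrap/300-openstack.sh`) under `set -e`, so the run aborts at the first failing step. A minimal sketch of that abort-on-failure chaining in Python, assuming a hypothetical `run_bootstrap_chain` helper (this is not the testbed's actual code):

```python
import subprocess

def run_bootstrap_chain(scripts):
    """Run each bootstrap script via `sh -c`, aborting on the first
    non-zero exit -- the same effect `set -e` has in the wrapper above.
    Returns the list of scripts that were actually started."""
    ran = []
    for script in scripts:
        ran.append(script)
        # check=True raises CalledProcessError on failure, which stops
        # the loop just like `set -e` stops the shell wrapper.
        subprocess.run(["sh", "-c", script], check=True)
    return ran
```

With `["true", "false", "true"]` the third command never runs, matching the shell semantics.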
2026-03-29 01:14:02.766953 | orchestrator | 2026-03-29 01:13:59 | INFO  | Flavor SCS-1L-1 created 2026-03-29 01:14:02.767024 | orchestrator | 2026-03-29 01:13:59 | INFO  | Flavor SCS-1L-1-5 created 2026-03-29 01:14:02.767038 | orchestrator | 2026-03-29 01:13:59 | INFO  | Flavor SCS-1V-2 created 2026-03-29 01:14:02.767045 | orchestrator | 2026-03-29 01:13:59 | INFO  | Flavor SCS-1V-2-5 created 2026-03-29 01:14:02.767053 | orchestrator | 2026-03-29 01:13:59 | INFO  | Flavor SCS-1V-4 created 2026-03-29 01:14:02.767059 | orchestrator | 2026-03-29 01:14:00 | INFO  | Flavor SCS-1V-4-10 created 2026-03-29 01:14:02.767066 | orchestrator | 2026-03-29 01:14:00 | INFO  | Flavor SCS-1V-8 created 2026-03-29 01:14:02.767073 | orchestrator | 2026-03-29 01:14:00 | INFO  | Flavor SCS-1V-8-20 created 2026-03-29 01:14:02.767085 | orchestrator | 2026-03-29 01:14:00 | INFO  | Flavor SCS-2V-4 created 2026-03-29 01:14:02.767091 | orchestrator | 2026-03-29 01:14:00 | INFO  | Flavor SCS-2V-4-10 created 2026-03-29 01:14:02.767098 | orchestrator | 2026-03-29 01:14:00 | INFO  | Flavor SCS-2V-8 created 2026-03-29 01:14:02.767104 | orchestrator | 2026-03-29 01:14:00 | INFO  | Flavor SCS-2V-8-20 created 2026-03-29 01:14:02.767111 | orchestrator | 2026-03-29 01:14:00 | INFO  | Flavor SCS-2V-16 created 2026-03-29 01:14:02.767117 | orchestrator | 2026-03-29 01:14:00 | INFO  | Flavor SCS-2V-16-50 created 2026-03-29 01:14:02.767123 | orchestrator | 2026-03-29 01:14:00 | INFO  | Flavor SCS-4V-8 created 2026-03-29 01:14:02.767131 | orchestrator | 2026-03-29 01:14:01 | INFO  | Flavor SCS-4V-8-20 created 2026-03-29 01:14:02.767140 | orchestrator | 2026-03-29 01:14:01 | INFO  | Flavor SCS-4V-16 created 2026-03-29 01:14:02.767150 | orchestrator | 2026-03-29 01:14:01 | INFO  | Flavor SCS-4V-16-50 created 2026-03-29 01:14:02.767160 | orchestrator | 2026-03-29 01:14:01 | INFO  | Flavor SCS-4V-32 created 2026-03-29 01:14:02.767169 | orchestrator | 2026-03-29 01:14:01 | INFO  | Flavor SCS-4V-32-100 created 
2026-03-29 01:14:02.767178 | orchestrator | 2026-03-29 01:14:01 | INFO  | Flavor SCS-8V-16 created 2026-03-29 01:14:02.767185 | orchestrator | 2026-03-29 01:14:01 | INFO  | Flavor SCS-8V-16-50 created 2026-03-29 01:14:02.767192 | orchestrator | 2026-03-29 01:14:01 | INFO  | Flavor SCS-8V-32 created 2026-03-29 01:14:02.767199 | orchestrator | 2026-03-29 01:14:01 | INFO  | Flavor SCS-8V-32-100 created 2026-03-29 01:14:02.767206 | orchestrator | 2026-03-29 01:14:01 | INFO  | Flavor SCS-16V-32 created 2026-03-29 01:14:02.767212 | orchestrator | 2026-03-29 01:14:02 | INFO  | Flavor SCS-16V-32-100 created 2026-03-29 01:14:02.767218 | orchestrator | 2026-03-29 01:14:02 | INFO  | Flavor SCS-2V-4-20s created 2026-03-29 01:14:02.767224 | orchestrator | 2026-03-29 01:14:02 | INFO  | Flavor SCS-4V-8-50s created 2026-03-29 01:14:02.767229 | orchestrator | 2026-03-29 01:14:02 | INFO  | Flavor SCS-4V-16-100s created 2026-03-29 01:14:02.767235 | orchestrator | 2026-03-29 01:14:02 | INFO  | Flavor SCS-8V-32-100s created 2026-03-29 01:14:04.179567 | orchestrator | 2026-03-29 01:14:04 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-29 01:14:14.262414 | orchestrator | 2026-03-29 01:14:14 | INFO  | Prepare task for execution of bootstrap-basic. 2026-03-29 01:14:14.351352 | orchestrator | 2026-03-29 01:14:14 | INFO  | Task e8730070-fbf3-4d50-9a9f-5a8994afe575 (bootstrap-basic) was prepared for execution. 2026-03-29 01:14:14.351440 | orchestrator | 2026-03-29 01:14:14 | INFO  | It takes a moment until task e8730070-fbf3-4d50-9a9f-5a8994afe575 (bootstrap-basic) has been started and output is visible here. 
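The flavor names created above follow the SCS naming scheme, where `SCS-2V-4-10` reads as 2 vCPUs (the class letter, `V` or `L` in this log, denotes the CPU class), 4 GiB RAM, and an optional 10 GB root disk with an optional `s` suffix for SSD. A sketch of a parser for these names; the regex is an illustrative reading of the names in this log, not the authoritative SCS standard:

```python
import re

# Matches names like SCS-1L-1, SCS-2V-4-10, SCS-2V-4-20s.
FLAVOR_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cls>[A-Z])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<ssd>s)?)?$"
)

def parse_flavor(name):
    """Split an SCS-style flavor name into its resource components."""
    m = FLAVOR_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS-style flavor name: {name}")
    return {
        "vcpus": int(m.group("cpus")),
        "cpu_class": m.group("cls"),
        "ram_gib": int(m.group("ram")),
        # Diskless flavors such as SCS-2V-4 omit the disk field.
        "disk_gb": int(m.group("disk")) if m.group("disk") else 0,
        "ssd": bool(m.group("ssd")),
    }
```

For example, `parse_flavor("SCS-2V-4-20s")` yields 2 vCPUs, 4 GiB RAM, and a 20 GB SSD root disk.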
2026-03-29 01:15:01.233131 | orchestrator | 2026-03-29 01:15:01.233190 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-29 01:15:01.233196 | orchestrator | 2026-03-29 01:15:01.233201 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 01:15:01.233205 | orchestrator | Sunday 29 March 2026 01:14:17 +0000 (0:00:00.110) 0:00:00.110 ********** 2026-03-29 01:15:01.233209 | orchestrator | ok: [localhost] 2026-03-29 01:15:01.233214 | orchestrator | 2026-03-29 01:15:01.233218 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-29 01:15:01.233222 | orchestrator | Sunday 29 March 2026 01:14:19 +0000 (0:00:02.098) 0:00:02.208 ********** 2026-03-29 01:15:01.233227 | orchestrator | ok: [localhost] 2026-03-29 01:15:01.233231 | orchestrator | 2026-03-29 01:15:01.233235 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-29 01:15:01.233238 | orchestrator | Sunday 29 March 2026 01:14:28 +0000 (0:00:08.807) 0:00:11.016 ********** 2026-03-29 01:15:01.233242 | orchestrator | changed: [localhost] 2026-03-29 01:15:01.233247 | orchestrator | 2026-03-29 01:15:01.233250 | orchestrator | TASK [Create public network] *************************************************** 2026-03-29 01:15:01.233254 | orchestrator | Sunday 29 March 2026 01:14:37 +0000 (0:00:08.439) 0:00:19.455 ********** 2026-03-29 01:15:01.233258 | orchestrator | changed: [localhost] 2026-03-29 01:15:01.233262 | orchestrator | 2026-03-29 01:15:01.233268 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-29 01:15:01.233272 | orchestrator | Sunday 29 March 2026 01:14:43 +0000 (0:00:06.157) 0:00:25.612 ********** 2026-03-29 01:15:01.233276 | orchestrator | changed: [localhost] 2026-03-29 01:15:01.233280 | orchestrator | 2026-03-29 01:15:01.233284 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-03-29 01:15:01.233287 | orchestrator | Sunday 29 March 2026 01:14:49 +0000 (0:00:06.212) 0:00:31.825 ********** 2026-03-29 01:15:01.233291 | orchestrator | changed: [localhost] 2026-03-29 01:15:01.233295 | orchestrator | 2026-03-29 01:15:01.233299 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-29 01:15:01.233303 | orchestrator | Sunday 29 March 2026 01:14:53 +0000 (0:00:03.929) 0:00:35.754 ********** 2026-03-29 01:15:01.233306 | orchestrator | changed: [localhost] 2026-03-29 01:15:01.233310 | orchestrator | 2026-03-29 01:15:01.233314 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-29 01:15:01.233323 | orchestrator | Sunday 29 March 2026 01:14:57 +0000 (0:00:03.685) 0:00:39.440 ********** 2026-03-29 01:15:01.233327 | orchestrator | ok: [localhost] 2026-03-29 01:15:01.233331 | orchestrator | 2026-03-29 01:15:01.233335 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:15:01.233339 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:15:01.233343 | orchestrator | 2026-03-29 01:15:01.233347 | orchestrator | 2026-03-29 01:15:01.233351 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:15:01.233355 | orchestrator | Sunday 29 March 2026 01:15:00 +0000 (0:00:03.879) 0:00:43.320 ********** 2026-03-29 01:15:01.233358 | orchestrator | =============================================================================== 2026-03-29 01:15:01.233362 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.81s 2026-03-29 01:15:01.233384 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.44s 2026-03-29 01:15:01.233388 | 
orchestrator | Set public network to default ------------------------------------------- 6.21s 2026-03-29 01:15:01.233392 | orchestrator | Create public network --------------------------------------------------- 6.16s 2026-03-29 01:15:01.233396 | orchestrator | Create public subnet ---------------------------------------------------- 3.93s 2026-03-29 01:15:01.233400 | orchestrator | Create manager role ----------------------------------------------------- 3.88s 2026-03-29 01:15:01.233404 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.69s 2026-03-29 01:15:01.233408 | orchestrator | Gathering Facts --------------------------------------------------------- 2.10s 2026-03-29 01:15:03.495259 | orchestrator | 2026-03-29 01:15:03 | INFO  | It takes a moment until task 01df50be-54ea-43fd-b4cc-256e09af95e8 (image-manager) has been started and output is visible here. 2026-03-29 01:15:46.610633 | orchestrator | 2026-03-29 01:15:06 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-29 01:15:46.610682 | orchestrator | 2026-03-29 01:15:06 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-29 01:15:46.610687 | orchestrator | 2026-03-29 01:15:06 | INFO  | Importing image Cirros 0.6.2 2026-03-29 01:15:46.610691 | orchestrator | 2026-03-29 01:15:06 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-29 01:15:46.610695 | orchestrator | 2026-03-29 01:15:08 | INFO  | Waiting for image to leave queued state... 2026-03-29 01:15:46.610699 | orchestrator | 2026-03-29 01:15:11 | INFO  | Waiting for import to complete... 
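The image-manager output below logs "Waiting for image to leave queued state..." while Glance imports Cirros. That is a standard poll-until-transition loop; a generic sketch of the pattern, using a hypothetical `wait_for_status` helper rather than the tool's actual implementation:

```python
import time

def wait_for_status(get_status, leave="queued", timeout=300, interval=2.0):
    """Poll get_status() until it returns something other than `leave`,
    or raise TimeoutError. Mirrors the 'leave queued state' wait in the
    image-manager log (names and defaults here are assumptions)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status != leave:
            return status
        time.sleep(interval)
    raise TimeoutError(f"still '{leave}' after {timeout}s")
```

The caller supplies `get_status`, e.g. a closure that re-fetches the Glance image and returns its `status` field.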
2026-03-29 01:15:46.610702 | orchestrator | 2026-03-29 01:15:21 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-29 01:15:46.610705 | orchestrator | 2026-03-29 01:15:21 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-29 01:15:46.610709 | orchestrator | 2026-03-29 01:15:21 | INFO  | Setting internal_version = 0.6.2 2026-03-29 01:15:46.610712 | orchestrator | 2026-03-29 01:15:21 | INFO  | Setting image_original_user = cirros 2026-03-29 01:15:46.610715 | orchestrator | 2026-03-29 01:15:21 | INFO  | Adding tag os:cirros 2026-03-29 01:15:46.610718 | orchestrator | 2026-03-29 01:15:21 | INFO  | Setting property architecture: x86_64 2026-03-29 01:15:46.610721 | orchestrator | 2026-03-29 01:15:22 | INFO  | Setting property hw_disk_bus: scsi 2026-03-29 01:15:46.610724 | orchestrator | 2026-03-29 01:15:22 | INFO  | Setting property hw_rng_model: virtio 2026-03-29 01:15:46.610728 | orchestrator | 2026-03-29 01:15:22 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-29 01:15:46.610731 | orchestrator | 2026-03-29 01:15:23 | INFO  | Setting property hw_watchdog_action: reset 2026-03-29 01:15:46.610734 | orchestrator | 2026-03-29 01:15:23 | INFO  | Setting property hypervisor_type: qemu 2026-03-29 01:15:46.610740 | orchestrator | 2026-03-29 01:15:23 | INFO  | Setting property os_distro: cirros 2026-03-29 01:15:46.610743 | orchestrator | 2026-03-29 01:15:23 | INFO  | Setting property os_purpose: minimal 2026-03-29 01:15:46.610747 | orchestrator | 2026-03-29 01:15:23 | INFO  | Setting property replace_frequency: never 2026-03-29 01:15:46.610750 | orchestrator | 2026-03-29 01:15:24 | INFO  | Setting property uuid_validity: none 2026-03-29 01:15:46.610753 | orchestrator | 2026-03-29 01:15:24 | INFO  | Setting property provided_until: none 2026-03-29 01:15:46.610756 | orchestrator | 2026-03-29 01:15:24 | INFO  | Setting property image_description: Cirros 2026-03-29 01:15:46.610759 | orchestrator | 2026-03-29 01:15:24 | INFO  | 
Setting property image_name: Cirros 2026-03-29 01:15:46.610772 | orchestrator | 2026-03-29 01:15:24 | INFO  | Setting property internal_version: 0.6.2 2026-03-29 01:15:46.610776 | orchestrator | 2026-03-29 01:15:25 | INFO  | Setting property image_original_user: cirros 2026-03-29 01:15:46.610779 | orchestrator | 2026-03-29 01:15:25 | INFO  | Setting property os_version: 0.6.2 2026-03-29 01:15:46.610782 | orchestrator | 2026-03-29 01:15:25 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-29 01:15:46.610786 | orchestrator | 2026-03-29 01:15:25 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-29 01:15:46.610789 | orchestrator | 2026-03-29 01:15:25 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-29 01:15:46.610792 | orchestrator | 2026-03-29 01:15:25 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-29 01:15:46.610797 | orchestrator | 2026-03-29 01:15:25 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-29 01:15:46.610800 | orchestrator | 2026-03-29 01:15:26 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-29 01:15:46.610803 | orchestrator | 2026-03-29 01:15:26 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-29 01:15:46.610807 | orchestrator | 2026-03-29 01:15:26 | INFO  | Importing image Cirros 0.6.3 2026-03-29 01:15:46.610810 | orchestrator | 2026-03-29 01:15:26 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-29 01:15:46.610813 | orchestrator | 2026-03-29 01:15:28 | INFO  | Waiting for image to leave queued state... 2026-03-29 01:15:46.610816 | orchestrator | 2026-03-29 01:15:30 | INFO  | Waiting for import to complete... 
2026-03-29 01:15:46.610825 | orchestrator | 2026-03-29 01:15:40 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-03-29 01:15:46.610828 | orchestrator | 2026-03-29 01:15:41 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-03-29 01:15:46.610831 | orchestrator | 2026-03-29 01:15:41 | INFO  | Setting internal_version = 0.6.3 2026-03-29 01:15:46.610834 | orchestrator | 2026-03-29 01:15:41 | INFO  | Setting image_original_user = cirros 2026-03-29 01:15:46.610837 | orchestrator | 2026-03-29 01:15:41 | INFO  | Adding tag os:cirros 2026-03-29 01:15:46.610840 | orchestrator | 2026-03-29 01:15:41 | INFO  | Setting property architecture: x86_64 2026-03-29 01:15:46.610843 | orchestrator | 2026-03-29 01:15:41 | INFO  | Setting property hw_disk_bus: scsi 2026-03-29 01:15:46.610846 | orchestrator | 2026-03-29 01:15:42 | INFO  | Setting property hw_rng_model: virtio 2026-03-29 01:15:46.610849 | orchestrator | 2026-03-29 01:15:42 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-29 01:15:46.610853 | orchestrator | 2026-03-29 01:15:42 | INFO  | Setting property hw_watchdog_action: reset 2026-03-29 01:15:46.610856 | orchestrator | 2026-03-29 01:15:42 | INFO  | Setting property hypervisor_type: qemu 2026-03-29 01:15:46.610859 | orchestrator | 2026-03-29 01:15:43 | INFO  | Setting property os_distro: cirros 2026-03-29 01:15:46.610862 | orchestrator | 2026-03-29 01:15:43 | INFO  | Setting property os_purpose: minimal 2026-03-29 01:15:46.610865 | orchestrator | 2026-03-29 01:15:43 | INFO  | Setting property replace_frequency: never 2026-03-29 01:15:46.610868 | orchestrator | 2026-03-29 01:15:43 | INFO  | Setting property uuid_validity: none 2026-03-29 01:15:46.610871 | orchestrator | 2026-03-29 01:15:44 | INFO  | Setting property provided_until: none 2026-03-29 01:15:46.610874 | orchestrator | 2026-03-29 01:15:44 | INFO  | Setting property image_description: Cirros 2026-03-29 01:15:46.610880 | orchestrator | 2026-03-29 01:15:44 | INFO  | 
Setting property image_name: Cirros 2026-03-29 01:15:46.610883 | orchestrator | 2026-03-29 01:15:44 | INFO  | Setting property internal_version: 0.6.3 2026-03-29 01:15:46.610886 | orchestrator | 2026-03-29 01:15:44 | INFO  | Setting property image_original_user: cirros 2026-03-29 01:15:46.610889 | orchestrator | 2026-03-29 01:15:45 | INFO  | Setting property os_version: 0.6.3 2026-03-29 01:15:46.610900 | orchestrator | 2026-03-29 01:15:45 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-29 01:15:46.610907 | orchestrator | 2026-03-29 01:15:45 | INFO  | Setting property image_build_date: 2024-09-26 2026-03-29 01:15:46.610910 | orchestrator | 2026-03-29 01:15:45 | INFO  | Checking status of 'Cirros 0.6.3' 2026-03-29 01:15:46.610913 | orchestrator | 2026-03-29 01:15:45 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-03-29 01:15:46.610916 | orchestrator | 2026-03-29 01:15:45 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-03-29 01:15:46.848498 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-03-29 01:15:48.570979 | orchestrator | 2026-03-29 01:15:48 | INFO  | date: 2026-03-28 2026-03-29 01:15:48.571035 | orchestrator | 2026-03-29 01:15:48 | INFO  | image: octavia-amphora-haproxy-2024.2.20260328.qcow2 2026-03-29 01:15:48.571055 | orchestrator | 2026-03-29 01:15:48 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260328.qcow2 2026-03-29 01:15:48.571060 | orchestrator | 2026-03-29 01:15:48 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260328.qcow2.CHECKSUM 2026-03-29 01:15:48.950671 | orchestrator | 2026-03-29 01:15:48 | INFO  | checksum: d8129f2399256e335fa58752e7bcbe178527a1e3d0a6709e3e9c03f99848308a 2026-03-29 01:15:49.033617 | orchestrator | 
2026-03-29 01:15:49 | INFO  | It takes a moment until task 3793627d-92ca-490d-99b4-ab72dad7b87e (image-manager) has been started and output is visible here. 2026-03-29 01:16:51.055757 | orchestrator | 2026-03-29 01:15:51 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-28' 2026-03-29 01:16:51.055857 | orchestrator | 2026-03-29 01:15:51 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260328.qcow2: 200 2026-03-29 01:16:51.055865 | orchestrator | 2026-03-29 01:15:51 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-28 2026-03-29 01:16:51.055870 | orchestrator | 2026-03-29 01:15:51 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260328.qcow2 2026-03-29 01:16:51.055876 | orchestrator | 2026-03-29 01:15:52 | INFO  | Waiting for image to leave queued state... 2026-03-29 01:16:51.055880 | orchestrator | 2026-03-29 01:15:54 | INFO  | Waiting for import to complete... 2026-03-29 01:16:51.055885 | orchestrator | 2026-03-29 01:16:04 | INFO  | Waiting for import to complete... 2026-03-29 01:16:51.055889 | orchestrator | 2026-03-29 01:16:14 | INFO  | Waiting for import to complete... 2026-03-29 01:16:51.055893 | orchestrator | 2026-03-29 01:16:24 | INFO  | Waiting for import to complete... 2026-03-29 01:16:51.055899 | orchestrator | 2026-03-29 01:16:35 | INFO  | Waiting for import to complete... 
2026-03-29 01:16:51.055903 | orchestrator | 2026-03-29 01:16:45 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-28' successfully completed, reloading images 2026-03-29 01:16:51.055925 | orchestrator | 2026-03-29 01:16:45 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-28' 2026-03-29 01:16:51.055929 | orchestrator | 2026-03-29 01:16:45 | INFO  | Setting internal_version = 2026-03-28 2026-03-29 01:16:51.055933 | orchestrator | 2026-03-29 01:16:45 | INFO  | Setting image_original_user = ubuntu 2026-03-29 01:16:51.055938 | orchestrator | 2026-03-29 01:16:45 | INFO  | Adding tag amphora 2026-03-29 01:16:51.055942 | orchestrator | 2026-03-29 01:16:46 | INFO  | Adding tag os:ubuntu 2026-03-29 01:16:51.055946 | orchestrator | 2026-03-29 01:16:46 | INFO  | Setting property architecture: x86_64 2026-03-29 01:16:51.055950 | orchestrator | 2026-03-29 01:16:46 | INFO  | Setting property hw_disk_bus: scsi 2026-03-29 01:16:51.055953 | orchestrator | 2026-03-29 01:16:46 | INFO  | Setting property hw_rng_model: virtio 2026-03-29 01:16:51.055958 | orchestrator | 2026-03-29 01:16:46 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-29 01:16:51.055962 | orchestrator | 2026-03-29 01:16:47 | INFO  | Setting property hw_watchdog_action: reset 2026-03-29 01:16:51.055966 | orchestrator | 2026-03-29 01:16:47 | INFO  | Setting property hypervisor_type: qemu 2026-03-29 01:16:51.055970 | orchestrator | 2026-03-29 01:16:47 | INFO  | Setting property os_distro: ubuntu 2026-03-29 01:16:51.055976 | orchestrator | 2026-03-29 01:16:47 | INFO  | Setting property replace_frequency: quarterly 2026-03-29 01:16:51.055982 | orchestrator | 2026-03-29 01:16:47 | INFO  | Setting property uuid_validity: last-1 2026-03-29 01:16:51.055988 | orchestrator | 2026-03-29 01:16:48 | INFO  | Setting property provided_until: none 2026-03-29 01:16:51.055998 | orchestrator | 2026-03-29 01:16:48 | INFO  | Setting property os_purpose: network 2026-03-29 01:16:51.056004 | orchestrator 
| 2026-03-29 01:16:48 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-03-29 01:16:51.056024 | orchestrator | 2026-03-29 01:16:49 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-03-29 01:16:51.056031 | orchestrator | 2026-03-29 01:16:49 | INFO  | Setting property internal_version: 2026-03-28 2026-03-29 01:16:51.056037 | orchestrator | 2026-03-29 01:16:49 | INFO  | Setting property image_original_user: ubuntu 2026-03-29 01:16:51.056055 | orchestrator | 2026-03-29 01:16:49 | INFO  | Setting property os_version: 2026-03-28 2026-03-29 01:16:51.056062 | orchestrator | 2026-03-29 01:16:50 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260328.qcow2 2026-03-29 01:16:51.056075 | orchestrator | 2026-03-29 01:16:50 | INFO  | Setting property image_build_date: 2026-03-28 2026-03-29 01:16:51.056081 | orchestrator | 2026-03-29 01:16:50 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-28' 2026-03-29 01:16:51.056087 | orchestrator | 2026-03-29 01:16:50 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-28' 2026-03-29 01:16:51.056093 | orchestrator | 2026-03-29 01:16:50 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-03-29 01:16:51.056113 | orchestrator | 2026-03-29 01:16:50 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-03-29 01:16:51.056123 | orchestrator | 2026-03-29 01:16:50 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-03-29 01:16:51.056127 | orchestrator | 2026-03-29 01:16:50 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-03-29 01:16:51.568778 | orchestrator | ok: Runtime: 0:03:02.058121 2026-03-29 01:16:51.595683 | 2026-03-29 01:16:51.595849 | TASK [Run checks] 2026-03-29 01:16:52.320893 | orchestrator | + set -e 2026-03-29 01:16:52.321006 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-03-29 01:16:52.321021 | orchestrator | ++ export INTERACTIVE=false 2026-03-29 01:16:52.321032 | orchestrator | ++ INTERACTIVE=false 2026-03-29 01:16:52.321040 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-29 01:16:52.321048 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-29 01:16:52.321056 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-29 01:16:52.322502 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-29 01:16:52.328905 | orchestrator | 2026-03-29 01:16:52.328989 | orchestrator | # CHECK 2026-03-29 01:16:52.329004 | orchestrator | 2026-03-29 01:16:52.329016 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-29 01:16:52.329030 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-29 01:16:52.329042 | orchestrator | + echo 2026-03-29 01:16:52.329053 | orchestrator | + echo '# CHECK' 2026-03-29 01:16:52.329065 | orchestrator | + echo 2026-03-29 01:16:52.329075 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-29 01:16:52.329986 | orchestrator | ++ semver latest 5.0.0 2026-03-29 01:16:52.397451 | orchestrator | 2026-03-29 01:16:52.397521 | orchestrator | ## Containers @ testbed-manager 2026-03-29 01:16:52.397531 | orchestrator | 2026-03-29 01:16:52.397541 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-29 01:16:52.397548 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-29 01:16:52.397596 | orchestrator | + echo 2026-03-29 01:16:52.397607 | orchestrator | + echo '## Containers @ testbed-manager' 2026-03-29 01:16:52.397616 | orchestrator | + echo 2026-03-29 01:16:52.397624 | orchestrator | + osism container testbed-manager ps 2026-03-29 01:16:53.500768 | orchestrator | 2026-03-29 01:16:53 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-03-29 01:16:53.891865 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 
2026-03-29 01:16:53.891973 | orchestrator | 317f46e0052d registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter 2026-03-29 01:16:53.892001 | orchestrator | 4cc64a42813d registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager 2026-03-29 01:16:53.892016 | orchestrator | 1651885335b2 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-03-29 01:16:53.892024 | orchestrator | ef5a939d764b registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-03-29 01:16:53.892036 | orchestrator | 3591f9071892 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server 2026-03-29 01:16:53.892044 | orchestrator | 12dd7d2403eb registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 16 minutes cephclient 2026-03-29 01:16:53.892052 | orchestrator | ab686baec3cb registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2026-03-29 01:16:53.892060 | orchestrator | 59a58237e949 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2026-03-29 01:16:53.892086 | orchestrator | b8e688451292 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd 2026-03-29 01:16:53.892094 | orchestrator | 33521cdbfc0e phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 29 minutes ago Up 28 minutes (healthy) 80/tcp phpmyadmin 2026-03-29 01:16:53.892102 | orchestrator | 05e639d2217b registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 29 minutes ago Up 29 minutes openstackclient 2026-03-29 01:16:53.892110 | orchestrator | fe7303f3d9ad 
registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 29 minutes ago Up 29 minutes (healthy) 8080/tcp homer 2026-03-29 01:16:53.892118 | orchestrator | 4e0e3a92c51c registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 52 minutes ago Up 51 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2026-03-29 01:16:53.892125 | orchestrator | 97d04a6c6575 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 56 minutes ago Up 35 minutes (healthy) manager-inventory_reconciler-1 2026-03-29 01:16:53.892133 | orchestrator | 366391e00738 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) osism-ansible 2026-03-29 01:16:53.892158 | orchestrator | d85b6fcf532f registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) osism-kubernetes 2026-03-29 01:16:53.892170 | orchestrator | d8864fac79e8 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) kolla-ansible 2026-03-29 01:16:53.892178 | orchestrator | ff6625345d9d registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) ceph-ansible 2026-03-29 01:16:53.892186 | orchestrator | ea3cda184e53 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 56 minutes ago Up 36 minutes (healthy) 8000/tcp manager-ara-server-1 2026-03-29 01:16:53.892193 | orchestrator | 0dda4f0c6cdf registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-flower-1 2026-03-29 01:16:53.892201 | orchestrator | b4871eae0bc0 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-beat-1 2026-03-29 01:16:53.892209 | orchestrator | acc170bbd37f registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 56 minutes ago Up 36 minutes (healthy) osismclient 2026-03-29 01:16:53.892216 | 
orchestrator | 306363cf0f33 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-03-29 01:16:53.892230 | orchestrator | c22dd8d4d2a0 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 56 minutes ago Up 36 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2026-03-29 01:16:53.892237 | orchestrator | 73cc1be26909 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-openstack-1 2026-03-29 01:16:53.892245 | orchestrator | 19cdc73792f6 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 56 minutes ago Up 36 minutes (healthy) 3306/tcp manager-mariadb-1 2026-03-29 01:16:53.892252 | orchestrator | b03c8d3f7baa registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 56 minutes ago Up 36 minutes (healthy) 6379/tcp manager-redis-1 2026-03-29 01:16:53.892260 | orchestrator | 1ca05ef55310 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-listener-1 2026-03-29 01:16:53.892267 | orchestrator | c0d57ce809b6 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 57 minutes ago Up 57 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-03-29 01:16:54.061730 | orchestrator | 2026-03-29 01:16:54.061814 | orchestrator | ## Images @ testbed-manager 2026-03-29 01:16:54.061821 | orchestrator | 2026-03-29 01:16:54.061827 | orchestrator | + echo 2026-03-29 01:16:54.061831 | orchestrator | + echo '## Images @ testbed-manager' 2026-03-29 01:16:54.061836 | orchestrator | + echo 2026-03-29 01:16:54.061844 | orchestrator | + osism container testbed-manager images 2026-03-29 01:16:55.602663 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-29 01:16:55.602738 | orchestrator | registry.osism.tech/osism/osism-ansible latest 84cf620bd3a3 
About an hour ago 638MB 2026-03-29 01:16:55.602745 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 3ca743f503c9 About an hour ago 635MB 2026-03-29 01:16:55.602750 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 1a794087d8d4 About an hour ago 1.24GB 2026-03-29 01:16:55.602754 | orchestrator | registry.osism.tech/osism/osism latest 4c8fcbd0869b About an hour ago 406MB 2026-03-29 01:16:55.602772 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 41d9901606b2 About an hour ago 585MB 2026-03-29 01:16:55.602777 | orchestrator | registry.osism.tech/osism/osism-frontend latest 1bc05c93c067 About an hour ago 212MB 2026-03-29 01:16:55.602781 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 961d71132ad6 About an hour ago 357MB 2026-03-29 01:16:55.602785 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 79a5ae258a23 21 hours ago 239MB 2026-03-29 01:16:55.602789 | orchestrator | registry.osism.tech/osism/cephclient reef c4ba435ee8be 21 hours ago 453MB 2026-03-29 01:16:55.602792 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 0adcbfa3acec 22 hours ago 590MB 2026-03-29 01:16:55.602796 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 ba54e8f32140 22 hours ago 679MB 2026-03-29 01:16:55.602800 | orchestrator | registry.osism.tech/kolla/cron 2024.2 15336f5d1fc0 22 hours ago 277MB 2026-03-29 01:16:55.602804 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 d4b5279758b8 22 hours ago 415MB 2026-03-29 01:16:55.602808 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 01d0b531a133 22 hours ago 319MB 2026-03-29 01:16:55.602825 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 3717dbe9780f 22 hours ago 368MB 2026-03-29 01:16:55.602829 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 8f1dcd5cb691 22 hours ago 850MB 2026-03-29 01:16:55.602832 | orchestrator | 
registry.osism.tech/kolla/prometheus-node-exporter 2024.2 5029f1fb5219 22 hours ago 317MB 2026-03-29 01:16:55.602836 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 8 weeks ago 41.4MB 2026-03-29 01:16:55.602840 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB 2026-03-29 01:16:55.602844 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB 2026-03-29 01:16:55.602847 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB 2026-03-29 01:16:55.602851 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB 2026-03-29 01:16:55.602855 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB 2026-03-29 01:16:55.602859 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB 2026-03-29 01:16:55.791409 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-29 01:16:55.792172 | orchestrator | ++ semver latest 5.0.0 2026-03-29 01:16:55.864306 | orchestrator | 2026-03-29 01:16:55.864382 | orchestrator | ## Containers @ testbed-node-0 2026-03-29 01:16:55.864389 | orchestrator | 2026-03-29 01:16:55.864394 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-29 01:16:55.864400 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-29 01:16:55.864407 | orchestrator | + echo 2026-03-29 01:16:55.864414 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-03-29 01:16:55.864422 | orchestrator | + echo 2026-03-29 01:16:55.864430 | orchestrator | + osism container testbed-node-0 ps 2026-03-29 01:16:57.417439 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-29 01:16:57.417534 | orchestrator | 34a6c6e3a1e4 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-03-29 
01:16:57.417544 | orchestrator | ac8e59bd4b3c registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-03-29 01:16:57.417549 | orchestrator | 8326ef368496 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-03-29 01:16:57.417554 | orchestrator | e37d0048faf7 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-03-29 01:16:57.417561 | orchestrator | 82d7f160eb48 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2026-03-29 01:16:57.417567 | orchestrator | 4ea80cb129de registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 7 minutes (healthy) magnum_conductor 2026-03-29 01:16:57.417573 | orchestrator | 3fe752dfbb19 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-03-29 01:16:57.417627 | orchestrator | 472eced3ee80 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2026-03-29 01:16:57.417649 | orchestrator | 5a8b22688114 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2026-03-29 01:16:57.417672 | orchestrator | a4c36a1a1b19 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2026-03-29 01:16:57.417678 | orchestrator | 83f446e67f31 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-03-29 01:16:57.417683 | orchestrator | f07a19867175 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) neutron_server 2026-03-29 01:16:57.417689 | 
orchestrator | 34593b7ce1c2 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2026-03-29 01:16:57.417694 | orchestrator | 2abf0e62bfc1 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2026-03-29 01:16:57.417700 | orchestrator | 634e674ec289 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2026-03-29 01:16:57.417706 | orchestrator | 45565d636de5 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2026-03-29 01:16:57.417712 | orchestrator | a5d095524e3d registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2026-03-29 01:16:57.417718 | orchestrator | 354e5e658ba7 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9 2026-03-29 01:16:57.417724 | orchestrator | b0db57760987 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2026-03-29 01:16:57.417730 | orchestrator | b3989b8000a7 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2026-03-29 01:16:57.417735 | orchestrator | 4a6dcaac16f6 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2026-03-29 01:16:57.417758 | orchestrator | cd3b39c4e94f registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2026-03-29 01:16:57.417765 | orchestrator | d18786a8820b registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes 
(healthy) nova_scheduler 2026-03-29 01:16:57.417771 | orchestrator | 29b656316a48 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_backup 2026-03-29 01:16:57.417777 | orchestrator | 793b17a4207e registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2026-03-29 01:16:57.417787 | orchestrator | b93ead5e3590 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-03-29 01:16:57.417794 | orchestrator | 67cd91d7c23e registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume 2026-03-29 01:16:57.417798 | orchestrator | c94a4e24ff54 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2026-03-29 01:16:57.417806 | orchestrator | 8559184a0150 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-03-29 01:16:57.417816 | orchestrator | 1441d288df77 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2026-03-29 01:16:57.417820 | orchestrator | 53c9267eb49b registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2026-03-29 01:16:57.417824 | orchestrator | b7fca4e6db0e registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2026-03-29 01:16:57.417827 | orchestrator | d6748153c77c registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-03-29 01:16:57.417831 | orchestrator | fc9298a2d0eb 
registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0
2026-03-29 01:16:57.417835 | orchestrator | 4a0cd77f482e registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2026-03-29 01:16:57.417839 | orchestrator | 21d7ab8ccc28 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2026-03-29 01:16:57.417843 | orchestrator | 443219021f27 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2026-03-29 01:16:57.417846 | orchestrator | 7a88e6fc1f71 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2026-03-29 01:16:57.417850 | orchestrator | 91d33746ee0d registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb
2026-03-29 01:16:57.417854 | orchestrator | a2f0674a0c35 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2026-03-29 01:16:57.417858 | orchestrator | 04916623af5e registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2026-03-29 01:16:57.417862 | orchestrator | c215599420d8 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-0
2026-03-29 01:16:57.417866 | orchestrator | a32227182016 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2026-03-29 01:16:57.417870 | orchestrator | c1fce137489d registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2026-03-29 01:16:57.417879 | orchestrator | 4e1fc38043b2 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy
2026-03-29 01:16:57.417883 | orchestrator | 966e93a44ac1 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd
2026-03-29 01:16:57.417887 | orchestrator | d0745544f795 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db
2026-03-29 01:16:57.417891 | orchestrator | d25cd1887a63 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db
2026-03-29 01:16:57.417903 | orchestrator | c5e500ac787e registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-0
2026-03-29 01:16:57.417907 | orchestrator | 96e7ef6d6614 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2026-03-29 01:16:57.417910 | orchestrator | 5079acb50598 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2026-03-29 01:16:57.417914 | orchestrator | b3e831aa4b15 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2026-03-29 01:16:57.417918 | orchestrator | d03b636b2654 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel
2026-03-29 01:16:57.417922 | orchestrator | f462a29ac1e9 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db
2026-03-29 01:16:57.417928 | orchestrator | 2cab11d4c5c4 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis
2026-03-29 01:16:57.417932 | orchestrator | 76182f9347f9 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached
2026-03-29 01:16:57.417936 | orchestrator | 84de6584cfc3 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2026-03-29 01:16:57.417940 | orchestrator | 4844caff4f35 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2026-03-29 01:16:57.417944 | orchestrator | 30cf9fa38213 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2026-03-29 01:16:57.604833 | orchestrator |
2026-03-29 01:16:57.604927 | orchestrator | ## Images @ testbed-node-0
2026-03-29 01:16:57.604937 | orchestrator |
2026-03-29 01:16:57.604945 | orchestrator | + echo
2026-03-29 01:16:57.604952 | orchestrator | + echo '## Images @ testbed-node-0'
2026-03-29 01:16:57.604960 | orchestrator | + echo
2026-03-29 01:16:57.604966 | orchestrator | + osism container testbed-node-0 images
2026-03-29 01:16:59.137935 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-29 01:16:59.138007 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 233dae3e7f75 22 hours ago 1.35GB
2026-03-29 01:16:59.138042 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 fff86a8dfe72 22 hours ago 1.57GB
2026-03-29 01:16:59.138049 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 77a5d2c8cb3b 22 hours ago 1.54GB
2026-03-29 01:16:59.138053 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c8f3b0a8bb7d 22 hours ago 277MB
2026-03-29 01:16:59.138057 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 b0df71434c32 22 hours ago 285MB
2026-03-29 01:16:59.138061 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 0adcbfa3acec 22 hours ago 590MB
2026-03-29 01:16:59.138066 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 363de30ad5df 22 hours ago 1.04GB
2026-03-29 01:16:59.138070 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 061b2ef690dc 22 hours ago 333MB
2026-03-29 01:16:59.138074 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 203985212c2b 22 hours ago 287MB
2026-03-29 01:16:59.138078 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 ba54e8f32140 22 hours ago 679MB
2026-03-29 01:16:59.138100 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 8910c6ab6b78 22 hours ago 427MB
2026-03-29 01:16:59.138104 | orchestrator | registry.osism.tech/kolla/cron 2024.2 15336f5d1fc0 22 hours ago 277MB
2026-03-29 01:16:59.138108 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 1eff2fbb28c8 22 hours ago 463MB
2026-03-29 01:16:59.138112 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 fb4949a7e745 22 hours ago 303MB
2026-03-29 01:16:59.138116 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 b2d86a05941d 22 hours ago 309MB
2026-03-29 01:16:59.138120 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 bc057029f0fe 22 hours ago 312MB
2026-03-29 01:16:59.138137 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 3717dbe9780f 22 hours ago 368MB
2026-03-29 01:16:59.138141 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 5029f1fb5219 22 hours ago 317MB
2026-03-29 01:16:59.138147 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 2e9ebe226d62 22 hours ago 1.16GB
2026-03-29 01:16:59.138152 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 85f87c391f52 22 hours ago 290MB
2026-03-29 01:16:59.138159 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 136bfd26aba2 22 hours ago 290MB
2026-03-29 01:16:59.138167 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 41bff60c56d1 22 hours ago 284MB
2026-03-29 01:16:59.138176 | orchestrator | registry.osism.tech/kolla/redis 2024.2 ab0d88e5bec1 22 hours ago 284MB
2026-03-29 01:16:59.138185 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 a50f3bfd0c42 22 hours ago 1.08GB
2026-03-29 01:16:59.138190 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 6a0c7c59f262 22 hours ago 1.05GB
2026-03-29 01:16:59.138196 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 a567b59d0d84 22 hours ago 1.05GB
2026-03-29 01:16:59.138202 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 a8d5cbc00b0f 22 hours ago 1.42GB
2026-03-29 01:16:59.138208 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 689b3be02772 22 hours ago 1.42GB
2026-03-29 01:16:59.138213 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 ab02dd061263 22 hours ago 1.73GB
2026-03-29 01:16:59.138219 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 ca63a5d6c3f2 22 hours ago 1.42GB
2026-03-29 01:16:59.138225 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 915f98e1e1ab 22 hours ago 1.22GB
2026-03-29 01:16:59.138231 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 6380172c57f0 22 hours ago 1.22GB
2026-03-29 01:16:59.138237 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 bb03e7fc4641 22 hours ago 1.38GB
2026-03-29 01:16:59.138243 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 f1019912b5bc 22 hours ago 1.22GB
2026-03-29 01:16:59.138250 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 5649c5a8d9e0 22 hours ago 987MB
2026-03-29 01:16:59.138254 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 5e87018b87e4 22 hours ago 987MB
2026-03-29 01:16:59.138273 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 46e19a2738bf 22 hours ago 984MB
2026-03-29 01:16:59.138277 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 c53a908a3b99 22 hours ago 985MB
2026-03-29 01:16:59.138281 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 620024132112 22 hours ago 985MB
2026-03-29 01:16:59.138285 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 4dea567beeea 22 hours ago 985MB
2026-03-29 01:16:59.138295 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 586795ecfab0 22 hours ago 1.17GB
2026-03-29 01:16:59.138299 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 8e848eeed8c1 22 hours ago 986MB
2026-03-29 01:16:59.138303 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 5ad31f9f15a4 22 hours ago 1GB
2026-03-29 01:16:59.138307 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 b527884766fc 22 hours ago 1.06GB
2026-03-29 01:16:59.138310 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 fa7df5d87941 22 hours ago 1.11GB
2026-03-29 01:16:59.138324 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 d08646bdb70d 22 hours ago 995MB
2026-03-29 01:16:59.138330 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 c021aceafc6e 22 hours ago 994MB
2026-03-29 01:16:59.138335 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 cd17a44c126a 22 hours ago 995MB
2026-03-29 01:16:59.138343 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 0927b791a082 22 hours ago 1e+03MB
2026-03-29 01:16:59.138351 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 5964224d4edd 22 hours ago 1e+03MB
2026-03-29 01:16:59.138359 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 226e3f48acd8 22 hours ago 995MB
2026-03-29 01:16:59.138364 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 57cbdc1befa6 22 hours ago 1GB
2026-03-29 01:16:59.138370 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 0297ff7f5aa6 22 hours ago 1GB
2026-03-29 01:16:59.138375 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 50948bbcb33e 22 hours ago 1GB
2026-03-29 01:16:59.138381 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 78c0c4734053 22 hours ago 1.04GB
2026-03-29 01:16:59.138388 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 a01a82fd6904 22 hours ago 1.06GB
2026-03-29 01:16:59.138393 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 52e0b3c87a0d 22 hours ago 1.06GB
2026-03-29 01:16:59.138400 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 9e62358e4c10 22 hours ago 1.04GB
2026-03-29 01:16:59.138407 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 132d6f337160 22 hours ago 1.04GB
2026-03-29 01:16:59.138413 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 fedad62d4394 22 hours ago 1.25GB
2026-03-29 01:16:59.138419 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 fa08c78404d2 22 hours ago 1.14GB
2026-03-29 01:16:59.138425 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 a6e00fc3f91a 22 hours ago 851MB
2026-03-29 01:16:59.138433 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 5e07b971ee71 22 hours ago 851MB
2026-03-29 01:16:59.138441 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 65410be5b615 22 hours ago 851MB
2026-03-29 01:16:59.138450 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 ae71ccba3ea4 22 hours ago 851MB
2026-03-29 01:16:59.314651 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-29 01:16:59.315105 | orchestrator | ++ semver latest 5.0.0
2026-03-29 01:16:59.355568 | orchestrator |
2026-03-29 01:16:59.355699 | orchestrator | ## Containers @ testbed-node-1
2026-03-29 01:16:59.355710 | orchestrator |
2026-03-29 01:16:59.355718 | orchestrator | + [[ -1 -eq -1 ]]
2026-03-29 01:16:59.355724 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-29 01:16:59.355731 | orchestrator | + echo
2026-03-29 01:16:59.355739 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-03-29 01:16:59.355747 | orchestrator | + echo
2026-03-29 01:16:59.355753 | orchestrator | + osism container testbed-node-1 ps
2026-03-29 01:17:00.838638 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-29 01:17:00.838694 | orchestrator | 7f971304920d registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-03-29 01:17:00.838701 | orchestrator | 0d0ec3924d51 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-03-29 01:17:00.838704 | orchestrator | 19c973b08348 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-03-29 01:17:00.838707 | orchestrator | 7a470a62adf9 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-03-29 01:17:00.838711 | orchestrator | e84c1cff9c89 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2026-03-29 01:17:00.838723 | orchestrator | 1ea0a17a5916 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-03-29 01:17:00.838726 | orchestrator | f2535d5c1695 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 7 minutes (healthy) magnum_conductor
2026-03-29 01:17:00.838729 | orchestrator | 7b4f8eeeba2b registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2026-03-29 01:17:00.838735 | orchestrator | 72480b4c5501 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2026-03-29 01:17:00.838738 | orchestrator | 3ea70b3e8720 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2026-03-29 01:17:00.838741 | orchestrator | 405b891d7cc6 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2026-03-29 01:17:00.838744 | orchestrator | 61dd18675802 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server
2026-03-29 01:17:00.838747 | orchestrator | 2a0de81fa9cb registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker
2026-03-29 01:17:00.838751 | orchestrator | 766796500f9e registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2026-03-29 01:17:00.838754 | orchestrator | 7cbc556d8da5 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2026-03-29 01:17:00.838757 | orchestrator | c4f9f1b27d03 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2026-03-29 01:17:00.838760 | orchestrator | 5d64c82fb032 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2026-03-29 01:17:00.838763 | orchestrator | fe397590fbc4 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9
2026-03-29 01:17:00.838766 | orchestrator | 10d93f77d16b registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2026-03-29 01:17:00.838778 | orchestrator | 0aa9b2a6280a registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2026-03-29 01:17:00.838781 | orchestrator | b5a54c8edb68 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2026-03-29 01:17:00.838792 | orchestrator | e8b84a10239b registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2026-03-29 01:17:00.838795 | orchestrator | 58515762b806 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2026-03-29 01:17:00.838798 | orchestrator | 2a9470160e82 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_backup
2026-03-29 01:17:00.838801 | orchestrator | 8bd8592c81d6 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume
2026-03-29 01:17:00.838805 | orchestrator | 6983cdcaa520 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2026-03-29 01:17:00.838808 | orchestrator | 67dc4a07cc78 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2026-03-29 01:17:00.838813 | orchestrator | c123e4e46295 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2026-03-29 01:17:00.838816 | orchestrator | 7634d44a444a registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2026-03-29 01:17:00.838819 | orchestrator | 2a14b3fa4972 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2026-03-29 01:17:00.838822 | orchestrator | 8dbc35de6207 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2026-03-29 01:17:00.838826 | orchestrator | d7c600f51462 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2026-03-29 01:17:00.838829 | orchestrator | 0527279caf42 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2026-03-29 01:17:00.839544 | orchestrator | aed5640e9266 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1
2026-03-29 01:17:00.839574 | orchestrator | 9e5d46dd2ec5 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2026-03-29 01:17:00.839579 | orchestrator | 819e839fad94 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2026-03-29 01:17:00.839584 | orchestrator | a532a09e4dbf registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2026-03-29 01:17:00.839588 | orchestrator | d0f54487b14f registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2026-03-29 01:17:00.839592 | orchestrator | e2f3fbcbb60b registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards
2026-03-29 01:17:00.839696 | orchestrator | 37d8e52afd70 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2026-03-29 01:17:00.839703 | orchestrator | 5440a83298af registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch
2026-03-29 01:17:00.839707 | orchestrator | 999819e8f174 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-1
2026-03-29 01:17:00.839711 | orchestrator | a3bc8ce59cc6 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2026-03-29 01:17:00.839715 | orchestrator | 2db0ce91fe01 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2026-03-29 01:17:00.839719 | orchestrator | d107d4cc24c1 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy
2026-03-29 01:17:00.839723 | orchestrator | 40b40db81920 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd
2026-03-29 01:17:00.839726 | orchestrator | 273a39938826 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db
2026-03-29 01:17:00.839730 | orchestrator | 3041a633a894 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db
2026-03-29 01:17:00.839734 | orchestrator | fc9de10043cf registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-1
2026-03-29 01:17:00.839738 | orchestrator | c6b612b985d0 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2026-03-29 01:17:00.839750 | orchestrator | 687c783cec47 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2026-03-29 01:17:00.839754 | orchestrator | 3d0fa8032de4 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2026-03-29 01:17:00.839767 | orchestrator | 5016b1cee1e5 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel
2026-03-29 01:17:00.839771 | orchestrator | 48ab9a5fa8d0 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db
2026-03-29 01:17:00.839775 | orchestrator | 17f961a48435 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis
2026-03-29 01:17:00.839780 | orchestrator | 25f077123282 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) memcached
2026-03-29 01:17:00.839792 | orchestrator | ac350e02f420 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2026-03-29 01:17:00.839796 | orchestrator | ed5f270b2c4b registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2026-03-29 01:17:00.839804 | orchestrator | c024672b637c registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd
2026-03-29 01:17:01.007924 | orchestrator |
2026-03-29 01:17:01.007983 | orchestrator | ## Images @ testbed-node-1
2026-03-29 01:17:01.007992 | orchestrator |
2026-03-29 01:17:01.007997 | orchestrator | + echo
2026-03-29 01:17:01.008003 | orchestrator | + echo '## Images @ testbed-node-1'
2026-03-29 01:17:01.008009 | orchestrator | + echo
2026-03-29 01:17:01.008026 | orchestrator | + osism container testbed-node-1 images
2026-03-29 01:17:02.507755 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-29 01:17:02.507852 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 233dae3e7f75 22 hours ago 1.35GB
2026-03-29 01:17:02.507863 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 fff86a8dfe72 22 hours ago 1.57GB
2026-03-29 01:17:02.507870 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 77a5d2c8cb3b 22 hours ago 1.54GB
2026-03-29 01:17:02.507877 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c8f3b0a8bb7d 22 hours ago 277MB
2026-03-29 01:17:02.507884 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 b0df71434c32 22 hours ago 285MB
2026-03-29 01:17:02.507923 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 0adcbfa3acec 22 hours ago 590MB
2026-03-29 01:17:02.507933 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 061b2ef690dc 22 hours ago 333MB
2026-03-29 01:17:02.507945 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 363de30ad5df 22 hours ago 1.04GB
2026-03-29 01:17:02.507956 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 203985212c2b 22 hours ago 287MB
2026-03-29 01:17:02.507986 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 ba54e8f32140 22 hours ago 679MB
2026-03-29 01:17:02.507998 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 8910c6ab6b78 22 hours ago 427MB
2026-03-29 01:17:02.508009 | orchestrator | registry.osism.tech/kolla/cron 2024.2 15336f5d1fc0 22 hours ago 277MB
2026-03-29 01:17:02.508020 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 1eff2fbb28c8 22 hours ago 463MB
2026-03-29 01:17:02.508032 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 fb4949a7e745 22 hours ago 303MB
2026-03-29 01:17:02.508043 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 b2d86a05941d 22 hours ago 309MB
2026-03-29 01:17:02.508054 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 bc057029f0fe 22 hours ago 312MB
2026-03-29 01:17:02.508066 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 3717dbe9780f 22 hours ago 368MB
2026-03-29 01:17:02.508077 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 5029f1fb5219 22 hours ago 317MB
2026-03-29 01:17:02.508089 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 2e9ebe226d62 22 hours ago 1.16GB
2026-03-29 01:17:02.508100 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 85f87c391f52 22 hours ago 290MB
2026-03-29 01:17:02.508111 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 136bfd26aba2 22 hours ago 290MB
2026-03-29 01:17:02.508119 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 41bff60c56d1 22 hours ago 284MB
2026-03-29 01:17:02.508125 | orchestrator | registry.osism.tech/kolla/redis 2024.2 ab0d88e5bec1 22 hours ago 284MB
2026-03-29 01:17:02.508132 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 a50f3bfd0c42 22 hours ago 1.08GB
2026-03-29 01:17:02.508139 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 6a0c7c59f262 22 hours ago 1.05GB
2026-03-29 01:17:02.508164 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 a567b59d0d84 22 hours ago 1.05GB
2026-03-29 01:17:02.508171 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 a8d5cbc00b0f 22 hours ago 1.42GB
2026-03-29 01:17:02.508178 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 689b3be02772 22 hours ago 1.42GB
2026-03-29 01:17:02.508185 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 ab02dd061263 22 hours ago 1.73GB
2026-03-29 01:17:02.508192 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 ca63a5d6c3f2 22 hours ago 1.42GB
2026-03-29 01:17:02.508199 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 915f98e1e1ab 22 hours ago 1.22GB
2026-03-29 01:17:02.508206 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 6380172c57f0 22 hours ago 1.22GB
2026-03-29 01:17:02.508213 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 bb03e7fc4641 22 hours ago 1.38GB
2026-03-29 01:17:02.508233 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 f1019912b5bc 22 hours ago 1.22GB
2026-03-29 01:17:02.508240 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 586795ecfab0 22 hours ago 1.17GB
2026-03-29 01:17:02.508247 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 8e848eeed8c1 22 hours ago 986MB
2026-03-29 01:17:02.508271 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 fa7df5d87941 22 hours ago 1.11GB
2026-03-29 01:17:02.508278 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 d08646bdb70d 22 hours ago 995MB
2026-03-29 01:17:02.508284 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 c021aceafc6e 22 hours ago 994MB
2026-03-29 01:17:02.508291 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 cd17a44c126a 22 hours ago 995MB
2026-03-29 01:17:02.508298 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 0927b791a082 22 hours ago 1e+03MB
2026-03-29 01:17:02.508305 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 5964224d4edd 22 hours ago 1e+03MB
2026-03-29 01:17:02.508312 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 226e3f48acd8 22 hours ago 995MB
2026-03-29 01:17:02.508320 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 57cbdc1befa6 22 hours ago 1GB
2026-03-29 01:17:02.508328 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 0297ff7f5aa6 22 hours ago 1GB
2026-03-29 01:17:02.508336 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 50948bbcb33e 22 hours ago 1GB
2026-03-29 01:17:02.508344 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 78c0c4734053 22 hours ago 1.04GB
2026-03-29 01:17:02.508352 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 a01a82fd6904 22 hours ago 1.06GB
2026-03-29 01:17:02.508360 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 52e0b3c87a0d 22 hours ago 1.06GB
2026-03-29 01:17:02.508368 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 9e62358e4c10 22 hours ago 1.04GB
2026-03-29 01:17:02.508376 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 132d6f337160 22 hours ago 1.04GB
2026-03-29 01:17:02.508384 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 fedad62d4394 22 hours ago 1.25GB
2026-03-29 01:17:02.508391 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 fa08c78404d2 22 hours ago 1.14GB
2026-03-29 01:17:02.508399 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 a6e00fc3f91a 22 hours ago 851MB
2026-03-29 01:17:02.508407 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 5e07b971ee71 22 hours ago 851MB
2026-03-29 01:17:02.508421 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 65410be5b615 22 hours ago 851MB
2026-03-29 01:17:02.508429 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 ae71ccba3ea4 22 hours ago 851MB
2026-03-29 01:17:02.662377 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-29 01:17:02.662922 | orchestrator | ++ semver latest 5.0.0
2026-03-29 01:17:02.723874 | orchestrator |
2026-03-29 01:17:02.723938 | orchestrator | ## Containers @ testbed-node-2
2026-03-29 01:17:02.723944 | orchestrator |
2026-03-29 01:17:02.723949 | orchestrator | + [[ -1 -eq -1 ]]
2026-03-29 01:17:02.723953 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-29 01:17:02.723958 | orchestrator | + echo
2026-03-29 01:17:02.723962 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-03-29 01:17:02.723967 | orchestrator | + echo
2026-03-29 01:17:02.723972 | orchestrator | + osism container testbed-node-2 ps
2026-03-29 01:17:04.239266 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-29 01:17:04.239357 | orchestrator | c061d5b6c55b registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-03-29 01:17:04.239366 | orchestrator | 3374253b63a7 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-03-29 01:17:04.239373 | orchestrator | a36699d5dace registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-03-29 01:17:04.239378 | orchestrator | b7b8dc7f36d3 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-03-29 01:17:04.239383 | orchestrator | b48431cfac53 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2026-03-29 01:17:04.239388 | orchestrator | cd4f4d94d6a2 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-03-29 01:17:04.239393 | orchestrator | a48a5ffb8d4e registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2026-03-29 01:17:04.239398 | orchestrator | 0123d51c2ef4 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2026-03-29 01:17:04.239402 | orchestrator | 25ca8e0675e9 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2026-03-29 01:17:04.239407 | orchestrator | 1f252c046459 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2026-03-29 01:17:04.239412 | orchestrator | 19b6e019eb92 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2026-03-29 01:17:04.239416 | orchestrator | 93ba1fbf8289 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server
2026-03-29 01:17:04.239421 | orchestrator | d95b96475ca8 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker
2026-03-29 01:17:04.239426 | orchestrator | 0b39524cec40 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2026-03-29 01:17:04.239446 | orchestrator | 9aed6de91059 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2026-03-29 01:17:04.239470 | orchestrator | 6cd1365c3f07 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2026-03-29 01:17:04.239476 | orchestrator | 54ce1c77eb1a registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api
2026-03-29 01:17:04.239480 | orchestrator | c3979aca814e registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9
2026-03-29 01:17:04.239485 | orchestrator | 3f87f4e2fe54 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2026-03-29 01:17:04.239490 | orchestrator | ef658821f12c registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2026-03-29 01:17:04.239495 | orchestrator | c67cdcdc3e6d registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2026-03-29 01:17:04.239511 | orchestrator | 810bf57565fb registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2026-03-29 01:17:04.239516 | orchestrator | 32c377aa3e87 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2026-03-29 01:17:04.239521 | orchestrator | 8c387c4c5fd6 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_backup
2026-03-29 01:17:04.239525 | orchestrator | d15d6eba1e64 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume
2026-03-29 01:17:04.239530 | orchestrator | b6efdc8d7295 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2026-03-29 01:17:04.239535 | orchestrator | 33ffba935d33 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2026-03-29 01:17:04.239541 | orchestrator | 106c999afcd1 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2026-03-29 01:17:04.239545 | orchestrator | 13ee6984acd6 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2026-03-29 01:17:04.239550 | orchestrator | ef870212631b registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2026-03-29 01:17:04.239555 | orchestrator | d4829849bcfc registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2026-03-29 01:17:04.239571 | orchestrator | 4e703e74502f registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2026-03-29 01:17:04.239576 | orchestrator | 1e7e5728efe1 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2026-03-29 01:17:04.239581 | orchestrator | f545ac2af97e registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2
2026-03-29 01:17:04.239590 | orchestrator | 2d42819a5d25 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2026-03-29 01:17:04.239595 | orchestrator | b751e9f44bfd registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2026-03-29 01:17:04.239600 | orchestrator | 8004004b2c38 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2026-03-29 01:17:04.239604 | orchestrator | 03ab53e63d7e registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init
--single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2026-03-29 01:17:04.239655 | orchestrator | 1607033e8e47 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2026-03-29 01:17:04.239662 | orchestrator | 41ad78a5a401 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 19 minutes (healthy) mariadb 2026-03-29 01:17:04.239667 | orchestrator | f548b81caa3d registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2026-03-29 01:17:04.239672 | orchestrator | 325b3e75d7d4 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 21 minutes ceph-crash-testbed-node-2 2026-03-29 01:17:04.239676 | orchestrator | 9f58ca7e954f registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2026-03-29 01:17:04.239681 | orchestrator | 360a975d1d52 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2026-03-29 01:17:04.239691 | orchestrator | 3fcf6c8bc19b registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2026-03-29 01:17:04.239696 | orchestrator | a91287d828fb registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd 2026-03-29 01:17:04.239701 | orchestrator | 8d9f5bc90f6e registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2026-03-29 01:17:04.239706 | orchestrator | c965879d9e13 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db 2026-03-29 01:17:04.239710 | orchestrator | 8a0bf6fb5678 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) rabbitmq 2026-03-29 01:17:04.239719 | 
orchestrator | 9fd92284b6a7 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-2 2026-03-29 01:17:04.239724 | orchestrator | 9e754f90d40d registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2026-03-29 01:17:04.239729 | orchestrator | 8d9848909b79 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2026-03-29 01:17:04.239733 | orchestrator | fa2448ce39e8 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2026-03-29 01:17:04.239738 | orchestrator | f472d86d39df registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2026-03-29 01:17:04.239747 | orchestrator | 0f860ebc1c96 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) redis 2026-03-29 01:17:04.239752 | orchestrator | 1d11e41492de registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2026-03-29 01:17:04.239756 | orchestrator | 835f1d9a3363 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2026-03-29 01:17:04.239761 | orchestrator | 888cba10ec44 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2026-03-29 01:17:04.239765 | orchestrator | 0f19726d1aa0 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-03-29 01:17:04.412910 | orchestrator | 2026-03-29 01:17:04.413005 | orchestrator | ## Images @ testbed-node-2 2026-03-29 01:17:04.413017 | orchestrator | 2026-03-29 01:17:04.413026 | orchestrator | + echo 2026-03-29 01:17:04.413035 | orchestrator | + echo '## Images @ testbed-node-2' 
2026-03-29 01:17:04.413044 | orchestrator | + echo 2026-03-29 01:17:04.413053 | orchestrator | + osism container testbed-node-2 images 2026-03-29 01:17:05.894999 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-29 01:17:05.895078 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 233dae3e7f75 22 hours ago 1.35GB 2026-03-29 01:17:05.895084 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 fff86a8dfe72 22 hours ago 1.57GB 2026-03-29 01:17:05.895101 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 77a5d2c8cb3b 22 hours ago 1.54GB 2026-03-29 01:17:05.895106 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c8f3b0a8bb7d 22 hours ago 277MB 2026-03-29 01:17:05.895110 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 b0df71434c32 22 hours ago 285MB 2026-03-29 01:17:05.895114 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 0adcbfa3acec 22 hours ago 590MB 2026-03-29 01:17:05.895118 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 363de30ad5df 22 hours ago 1.04GB 2026-03-29 01:17:05.895122 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 061b2ef690dc 22 hours ago 333MB 2026-03-29 01:17:05.895126 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 203985212c2b 22 hours ago 287MB 2026-03-29 01:17:05.895130 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 ba54e8f32140 22 hours ago 679MB 2026-03-29 01:17:05.895134 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 8910c6ab6b78 22 hours ago 427MB 2026-03-29 01:17:05.895137 | orchestrator | registry.osism.tech/kolla/cron 2024.2 15336f5d1fc0 22 hours ago 277MB 2026-03-29 01:17:05.895141 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 1eff2fbb28c8 22 hours ago 463MB 2026-03-29 01:17:05.895146 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 fb4949a7e745 22 hours ago 303MB 2026-03-29 01:17:05.895151 | orchestrator | 
registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 b2d86a05941d 22 hours ago 309MB 2026-03-29 01:17:05.895157 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 bc057029f0fe 22 hours ago 312MB 2026-03-29 01:17:05.895163 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 3717dbe9780f 22 hours ago 368MB 2026-03-29 01:17:05.895172 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 5029f1fb5219 22 hours ago 317MB 2026-03-29 01:17:05.895181 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 2e9ebe226d62 22 hours ago 1.16GB 2026-03-29 01:17:05.895207 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 85f87c391f52 22 hours ago 290MB 2026-03-29 01:17:05.895213 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 136bfd26aba2 22 hours ago 290MB 2026-03-29 01:17:05.895219 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 41bff60c56d1 22 hours ago 284MB 2026-03-29 01:17:05.895224 | orchestrator | registry.osism.tech/kolla/redis 2024.2 ab0d88e5bec1 22 hours ago 284MB 2026-03-29 01:17:05.895230 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 a50f3bfd0c42 22 hours ago 1.08GB 2026-03-29 01:17:05.895236 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 6a0c7c59f262 22 hours ago 1.05GB 2026-03-29 01:17:05.895242 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 a567b59d0d84 22 hours ago 1.05GB 2026-03-29 01:17:05.895248 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 a8d5cbc00b0f 22 hours ago 1.42GB 2026-03-29 01:17:05.895254 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 689b3be02772 22 hours ago 1.42GB 2026-03-29 01:17:05.895260 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 ab02dd061263 22 hours ago 1.73GB 2026-03-29 01:17:05.895266 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 ca63a5d6c3f2 22 hours ago 1.42GB 2026-03-29 
01:17:05.895274 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 915f98e1e1ab 22 hours ago 1.22GB 2026-03-29 01:17:05.895284 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 6380172c57f0 22 hours ago 1.22GB 2026-03-29 01:17:05.895289 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 bb03e7fc4641 22 hours ago 1.38GB 2026-03-29 01:17:05.895295 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 f1019912b5bc 22 hours ago 1.22GB 2026-03-29 01:17:05.895301 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 586795ecfab0 22 hours ago 1.17GB 2026-03-29 01:17:05.895306 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 8e848eeed8c1 22 hours ago 986MB 2026-03-29 01:17:05.895329 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 fa7df5d87941 22 hours ago 1.11GB 2026-03-29 01:17:05.895335 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 d08646bdb70d 22 hours ago 995MB 2026-03-29 01:17:05.895341 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 c021aceafc6e 22 hours ago 994MB 2026-03-29 01:17:05.895347 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 cd17a44c126a 22 hours ago 995MB 2026-03-29 01:17:05.895354 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 0927b791a082 22 hours ago 1e+03MB 2026-03-29 01:17:05.895361 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 5964224d4edd 22 hours ago 1e+03MB 2026-03-29 01:17:05.895367 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 226e3f48acd8 22 hours ago 995MB 2026-03-29 01:17:05.895374 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 57cbdc1befa6 22 hours ago 1GB 2026-03-29 01:17:05.895379 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 0297ff7f5aa6 22 hours ago 1GB 2026-03-29 01:17:05.895382 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 50948bbcb33e 22 hours ago 1GB 
2026-03-29 01:17:05.895386 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 78c0c4734053 22 hours ago 1.04GB 2026-03-29 01:17:05.895390 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 a01a82fd6904 22 hours ago 1.06GB 2026-03-29 01:17:05.895401 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 52e0b3c87a0d 22 hours ago 1.06GB 2026-03-29 01:17:05.895412 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 9e62358e4c10 22 hours ago 1.04GB 2026-03-29 01:17:05.895416 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 132d6f337160 22 hours ago 1.04GB 2026-03-29 01:17:05.895420 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 fedad62d4394 22 hours ago 1.25GB 2026-03-29 01:17:05.895424 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 fa08c78404d2 22 hours ago 1.14GB 2026-03-29 01:17:05.895430 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 a6e00fc3f91a 22 hours ago 851MB 2026-03-29 01:17:05.895436 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 5e07b971ee71 22 hours ago 851MB 2026-03-29 01:17:05.895442 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 65410be5b615 22 hours ago 851MB 2026-03-29 01:17:05.895447 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 ae71ccba3ea4 22 hours ago 851MB 2026-03-29 01:17:06.050366 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-03-29 01:17:06.059446 | orchestrator | + set -e 2026-03-29 01:17:06.059532 | orchestrator | + source /opt/manager-vars.sh 2026-03-29 01:17:06.059998 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-29 01:17:06.060024 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-29 01:17:06.060029 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-29 01:17:06.060033 | orchestrator | ++ CEPH_VERSION=reef 2026-03-29 01:17:06.060037 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-29 01:17:06.060042 | 
orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-29 01:17:06.060047 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-29 01:17:06.060052 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-29 01:17:06.060059 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-29 01:17:06.060065 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-29 01:17:06.060071 | orchestrator | ++ export ARA=false 2026-03-29 01:17:06.060078 | orchestrator | ++ ARA=false 2026-03-29 01:17:06.060085 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-29 01:17:06.060090 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-29 01:17:06.060289 | orchestrator | ++ export TEMPEST=true 2026-03-29 01:17:06.060346 | orchestrator | ++ TEMPEST=true 2026-03-29 01:17:06.060352 | orchestrator | ++ export IS_ZUUL=true 2026-03-29 01:17:06.060357 | orchestrator | ++ IS_ZUUL=true 2026-03-29 01:17:06.060361 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2026-03-29 01:17:06.060367 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2026-03-29 01:17:06.060371 | orchestrator | ++ export EXTERNAL_API=false 2026-03-29 01:17:06.060376 | orchestrator | ++ EXTERNAL_API=false 2026-03-29 01:17:06.060380 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-29 01:17:06.060384 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-29 01:17:06.060388 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-29 01:17:06.060392 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-29 01:17:06.060396 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-29 01:17:06.060400 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-29 01:17:06.060404 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-29 01:17:06.060408 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-03-29 01:17:06.068976 | orchestrator | + set -e 2026-03-29 01:17:06.069059 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-29 01:17:06.069068 | 
orchestrator | ++ export INTERACTIVE=false 2026-03-29 01:17:06.069073 | orchestrator | ++ INTERACTIVE=false 2026-03-29 01:17:06.069077 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-29 01:17:06.069082 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-29 01:17:06.069086 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-29 01:17:06.070211 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-29 01:17:06.076057 | orchestrator | 2026-03-29 01:17:06.076166 | orchestrator | # Ceph status 2026-03-29 01:17:06.076192 | orchestrator | 2026-03-29 01:17:06.076206 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-29 01:17:06.076214 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-29 01:17:06.076221 | orchestrator | + echo 2026-03-29 01:17:06.076227 | orchestrator | + echo '# Ceph status' 2026-03-29 01:17:06.076232 | orchestrator | + echo 2026-03-29 01:17:06.076238 | orchestrator | + ceph -s 2026-03-29 01:17:06.678230 | orchestrator | cluster: 2026-03-29 01:17:06.678340 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-03-29 01:17:06.678349 | orchestrator | health: HEALTH_OK 2026-03-29 01:17:06.678355 | orchestrator | 2026-03-29 01:17:06.678359 | orchestrator | services: 2026-03-29 01:17:06.678364 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 26m) 2026-03-29 01:17:06.678371 | orchestrator | mgr: testbed-node-1(active, since 15m), standbys: testbed-node-2, testbed-node-0 2026-03-29 01:17:06.678376 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-03-29 01:17:06.678381 | orchestrator | osd: 6 osds: 6 up (since 22m), 6 in (since 23m) 2026-03-29 01:17:06.678386 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-03-29 01:17:06.678390 | orchestrator | 2026-03-29 01:17:06.678395 | orchestrator | data: 2026-03-29 01:17:06.678399 | orchestrator | volumes: 1/1 healthy 2026-03-29 01:17:06.678403 
| orchestrator | pools: 14 pools, 401 pgs 2026-03-29 01:17:06.678408 | orchestrator | objects: 554 objects, 2.2 GiB 2026-03-29 01:17:06.678412 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-03-29 01:17:06.678416 | orchestrator | pgs: 401 active+clean 2026-03-29 01:17:06.678420 | orchestrator | 2026-03-29 01:17:06.678425 | orchestrator | io: 2026-03-29 01:17:06.678429 | orchestrator | client: 8.2 KiB/s rd, 0 B/s wr, 8 op/s rd, 5 op/s wr 2026-03-29 01:17:06.678433 | orchestrator | 2026-03-29 01:17:06.723980 | orchestrator | 2026-03-29 01:17:06.724077 | orchestrator | # Ceph versions 2026-03-29 01:17:06.724092 | orchestrator | 2026-03-29 01:17:06.724104 | orchestrator | + echo 2026-03-29 01:17:06.724115 | orchestrator | + echo '# Ceph versions' 2026-03-29 01:17:06.724126 | orchestrator | + echo 2026-03-29 01:17:06.724137 | orchestrator | + ceph versions 2026-03-29 01:17:07.334131 | orchestrator | { 2026-03-29 01:17:07.334229 | orchestrator | "mon": { 2026-03-29 01:17:07.334240 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-03-29 01:17:07.334249 | orchestrator | }, 2026-03-29 01:17:07.334255 | orchestrator | "mgr": { 2026-03-29 01:17:07.334277 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-03-29 01:17:07.334283 | orchestrator | }, 2026-03-29 01:17:07.334289 | orchestrator | "osd": { 2026-03-29 01:17:07.334295 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 6 2026-03-29 01:17:07.334301 | orchestrator | }, 2026-03-29 01:17:07.334308 | orchestrator | "mds": { 2026-03-29 01:17:07.334315 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-03-29 01:17:07.334321 | orchestrator | }, 2026-03-29 01:17:07.334327 | orchestrator | "rgw": { 2026-03-29 01:17:07.334333 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef 
(stable)": 3 2026-03-29 01:17:07.334340 | orchestrator | }, 2026-03-29 01:17:07.334346 | orchestrator | "overall": { 2026-03-29 01:17:07.334352 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 18 2026-03-29 01:17:07.334359 | orchestrator | } 2026-03-29 01:17:07.334365 | orchestrator | } 2026-03-29 01:17:07.384623 | orchestrator | 2026-03-29 01:17:07.384715 | orchestrator | # Ceph OSD tree 2026-03-29 01:17:07.384722 | orchestrator | 2026-03-29 01:17:07.384726 | orchestrator | + echo 2026-03-29 01:17:07.384731 | orchestrator | + echo '# Ceph OSD tree' 2026-03-29 01:17:07.384736 | orchestrator | + echo 2026-03-29 01:17:07.384740 | orchestrator | + ceph osd df tree 2026-03-29 01:17:07.929188 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-03-29 01:17:07.929295 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2026-03-29 01:17:07.929306 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2026-03-29 01:17:07.929314 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.75 1.14 175 up osd.0 2026-03-29 01:17:07.929320 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.0 GiB 971 MiB 1 KiB 70 MiB 19 GiB 5.09 0.86 213 up osd.3 2026-03-29 01:17:07.929326 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2026-03-29 01:17:07.929333 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.99 1.01 195 up osd.1 2026-03-29 01:17:07.929363 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 5.85 0.99 197 up osd.5 2026-03-29 01:17:07.929369 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2026-03-29 01:17:07.929375 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 
KiB 70 MiB 19 GiB 6.92 1.17 195 up osd.2 2026-03-29 01:17:07.929381 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1005 MiB 931 MiB 1 KiB 74 MiB 19 GiB 4.91 0.83 195 up osd.4 2026-03-29 01:17:07.929387 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2026-03-29 01:17:07.929394 | orchestrator | MIN/MAX VAR: 0.83/1.17 STDDEV: 0.76 2026-03-29 01:17:08.002730 | orchestrator | 2026-03-29 01:17:08.002815 | orchestrator | # Ceph monitor status 2026-03-29 01:17:08.002828 | orchestrator | 2026-03-29 01:17:08.002834 | orchestrator | + echo 2026-03-29 01:17:08.002841 | orchestrator | + echo '# Ceph monitor status' 2026-03-29 01:17:08.002847 | orchestrator | + echo 2026-03-29 01:17:08.002853 | orchestrator | + ceph mon stat 2026-03-29 01:17:08.593321 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-03-29 01:17:08.638555 | orchestrator | 2026-03-29 01:17:08.638739 | orchestrator | # Ceph quorum status 2026-03-29 01:17:08.638756 | orchestrator | 2026-03-29 01:17:08.638763 | orchestrator | + echo 2026-03-29 01:17:08.638770 | orchestrator | + echo '# Ceph quorum status' 2026-03-29 01:17:08.638778 | orchestrator | + echo 2026-03-29 01:17:08.638843 | orchestrator | + ceph quorum_status 2026-03-29 01:17:08.639261 | orchestrator | + jq 2026-03-29 01:17:09.283810 | orchestrator | { 2026-03-29 01:17:09.283945 | orchestrator | "election_epoch": 8, 2026-03-29 01:17:09.283956 | orchestrator | "quorum": [ 2026-03-29 01:17:09.283963 | orchestrator | 0, 2026-03-29 01:17:09.283969 | orchestrator | 1, 2026-03-29 01:17:09.283975 | orchestrator | 2 2026-03-29 01:17:09.283981 | orchestrator | ], 2026-03-29 01:17:09.283987 | orchestrator | 
"quorum_names": [ 2026-03-29 01:17:09.283994 | orchestrator | "testbed-node-0", 2026-03-29 01:17:09.284000 | orchestrator | "testbed-node-1", 2026-03-29 01:17:09.284006 | orchestrator | "testbed-node-2" 2026-03-29 01:17:09.284011 | orchestrator | ], 2026-03-29 01:17:09.284017 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-03-29 01:17:09.284025 | orchestrator | "quorum_age": 1563, 2026-03-29 01:17:09.284031 | orchestrator | "features": { 2026-03-29 01:17:09.284037 | orchestrator | "quorum_con": "4540138322906710015", 2026-03-29 01:17:09.284043 | orchestrator | "quorum_mon": [ 2026-03-29 01:17:09.284048 | orchestrator | "kraken", 2026-03-29 01:17:09.284054 | orchestrator | "luminous", 2026-03-29 01:17:09.284060 | orchestrator | "mimic", 2026-03-29 01:17:09.284070 | orchestrator | "osdmap-prune", 2026-03-29 01:17:09.284079 | orchestrator | "nautilus", 2026-03-29 01:17:09.284093 | orchestrator | "octopus", 2026-03-29 01:17:09.284107 | orchestrator | "pacific", 2026-03-29 01:17:09.284115 | orchestrator | "elector-pinging", 2026-03-29 01:17:09.284124 | orchestrator | "quincy", 2026-03-29 01:17:09.284135 | orchestrator | "reef" 2026-03-29 01:17:09.284145 | orchestrator | ] 2026-03-29 01:17:09.284155 | orchestrator | }, 2026-03-29 01:17:09.284165 | orchestrator | "monmap": { 2026-03-29 01:17:09.284175 | orchestrator | "epoch": 1, 2026-03-29 01:17:09.284186 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-03-29 01:17:09.284198 | orchestrator | "modified": "2026-03-29T00:50:45.755517Z", 2026-03-29 01:17:09.284209 | orchestrator | "created": "2026-03-29T00:50:45.755517Z", 2026-03-29 01:17:09.284221 | orchestrator | "min_mon_release": 18, 2026-03-29 01:17:09.284230 | orchestrator | "min_mon_release_name": "reef", 2026-03-29 01:17:09.284237 | orchestrator | "election_strategy": 1, 2026-03-29 01:17:09.284244 | orchestrator | "disallowed_leaders": "", 2026-03-29 01:17:09.284251 | orchestrator | "stretch_mode": false, 2026-03-29 01:17:09.284258 
| orchestrator | "tiebreaker_mon": "", 2026-03-29 01:17:09.284266 | orchestrator | "removed_ranks": "", 2026-03-29 01:17:09.284273 | orchestrator | "features": { 2026-03-29 01:17:09.284280 | orchestrator | "persistent": [ 2026-03-29 01:17:09.284311 | orchestrator | "kraken", 2026-03-29 01:17:09.284319 | orchestrator | "luminous", 2026-03-29 01:17:09.284325 | orchestrator | "mimic", 2026-03-29 01:17:09.284332 | orchestrator | "osdmap-prune", 2026-03-29 01:17:09.284339 | orchestrator | "nautilus", 2026-03-29 01:17:09.284345 | orchestrator | "octopus", 2026-03-29 01:17:09.284352 | orchestrator | "pacific", 2026-03-29 01:17:09.284359 | orchestrator | "elector-pinging", 2026-03-29 01:17:09.284365 | orchestrator | "quincy", 2026-03-29 01:17:09.284372 | orchestrator | "reef" 2026-03-29 01:17:09.284380 | orchestrator | ], 2026-03-29 01:17:09.284386 | orchestrator | "optional": [] 2026-03-29 01:17:09.284393 | orchestrator | }, 2026-03-29 01:17:09.284400 | orchestrator | "mons": [ 2026-03-29 01:17:09.284406 | orchestrator | { 2026-03-29 01:17:09.284413 | orchestrator | "rank": 0, 2026-03-29 01:17:09.284420 | orchestrator | "name": "testbed-node-0", 2026-03-29 01:17:09.284427 | orchestrator | "public_addrs": { 2026-03-29 01:17:09.284434 | orchestrator | "addrvec": [ 2026-03-29 01:17:09.284441 | orchestrator | { 2026-03-29 01:17:09.284447 | orchestrator | "type": "v2", 2026-03-29 01:17:09.284455 | orchestrator | "addr": "192.168.16.10:3300", 2026-03-29 01:17:09.284462 | orchestrator | "nonce": 0 2026-03-29 01:17:09.284471 | orchestrator | }, 2026-03-29 01:17:09.284483 | orchestrator | { 2026-03-29 01:17:09.284497 | orchestrator | "type": "v1", 2026-03-29 01:17:09.284506 | orchestrator | "addr": "192.168.16.10:6789", 2026-03-29 01:17:09.284514 | orchestrator | "nonce": 0 2026-03-29 01:17:09.284524 | orchestrator | } 2026-03-29 01:17:09.284533 | orchestrator | ] 2026-03-29 01:17:09.284541 | orchestrator | }, 2026-03-29 01:17:09.284551 | orchestrator | "addr": 
"192.168.16.10:6789/0", 2026-03-29 01:17:09.284560 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-03-29 01:17:09.284570 | orchestrator | "priority": 0, 2026-03-29 01:17:09.284578 | orchestrator | "weight": 0, 2026-03-29 01:17:09.284587 | orchestrator | "crush_location": "{}" 2026-03-29 01:17:09.284597 | orchestrator | }, 2026-03-29 01:17:09.284605 | orchestrator | { 2026-03-29 01:17:09.284614 | orchestrator | "rank": 1, 2026-03-29 01:17:09.284623 | orchestrator | "name": "testbed-node-1", 2026-03-29 01:17:09.284650 | orchestrator | "public_addrs": { 2026-03-29 01:17:09.284661 | orchestrator | "addrvec": [ 2026-03-29 01:17:09.284670 | orchestrator | { 2026-03-29 01:17:09.284679 | orchestrator | "type": "v2", 2026-03-29 01:17:09.284687 | orchestrator | "addr": "192.168.16.11:3300", 2026-03-29 01:17:09.284695 | orchestrator | "nonce": 0 2026-03-29 01:17:09.284703 | orchestrator | }, 2026-03-29 01:17:09.284711 | orchestrator | { 2026-03-29 01:17:09.284720 | orchestrator | "type": "v1", 2026-03-29 01:17:09.284752 | orchestrator | "addr": "192.168.16.11:6789", 2026-03-29 01:17:09.284760 | orchestrator | "nonce": 0 2026-03-29 01:17:09.284768 | orchestrator | } 2026-03-29 01:17:09.284799 | orchestrator | ] 2026-03-29 01:17:09.284819 | orchestrator | }, 2026-03-29 01:17:09.284827 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-03-29 01:17:09.284836 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-03-29 01:17:09.284845 | orchestrator | "priority": 0, 2026-03-29 01:17:09.284853 | orchestrator | "weight": 0, 2026-03-29 01:17:09.284861 | orchestrator | "crush_location": "{}" 2026-03-29 01:17:09.284870 | orchestrator | }, 2026-03-29 01:17:09.284879 | orchestrator | { 2026-03-29 01:17:09.284889 | orchestrator | "rank": 2, 2026-03-29 01:17:09.284898 | orchestrator | "name": "testbed-node-2", 2026-03-29 01:17:09.284906 | orchestrator | "public_addrs": { 2026-03-29 01:17:09.284914 | orchestrator | "addrvec": [ 2026-03-29 01:17:09.284922 | orchestrator 
| { 2026-03-29 01:17:09.284930 | orchestrator | "type": "v2", 2026-03-29 01:17:09.284939 | orchestrator | "addr": "192.168.16.12:3300", 2026-03-29 01:17:09.284947 | orchestrator | "nonce": 0 2026-03-29 01:17:09.284955 | orchestrator | }, 2026-03-29 01:17:09.284964 | orchestrator | { 2026-03-29 01:17:09.284974 | orchestrator | "type": "v1", 2026-03-29 01:17:09.284983 | orchestrator | "addr": "192.168.16.12:6789", 2026-03-29 01:17:09.284991 | orchestrator | "nonce": 0 2026-03-29 01:17:09.284999 | orchestrator | } 2026-03-29 01:17:09.285007 | orchestrator | ] 2026-03-29 01:17:09.285016 | orchestrator | }, 2026-03-29 01:17:09.285024 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-03-29 01:17:09.285033 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-03-29 01:17:09.285053 | orchestrator | "priority": 0, 2026-03-29 01:17:09.285061 | orchestrator | "weight": 0, 2026-03-29 01:17:09.285071 | orchestrator | "crush_location": "{}" 2026-03-29 01:17:09.285080 | orchestrator | } 2026-03-29 01:17:09.285090 | orchestrator | ] 2026-03-29 01:17:09.285100 | orchestrator | } 2026-03-29 01:17:09.285108 | orchestrator | } 2026-03-29 01:17:09.285298 | orchestrator | 2026-03-29 01:17:09.285309 | orchestrator | # Ceph free space status 2026-03-29 01:17:09.285315 | orchestrator | 2026-03-29 01:17:09.285321 | orchestrator | + echo 2026-03-29 01:17:09.285327 | orchestrator | + echo '# Ceph free space status' 2026-03-29 01:17:09.285333 | orchestrator | + echo 2026-03-29 01:17:09.285339 | orchestrator | + ceph df 2026-03-29 01:17:09.901695 | orchestrator | --- RAW STORAGE --- 2026-03-29 01:17:09.901778 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-03-29 01:17:09.901795 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2026-03-29 01:17:09.901800 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2026-03-29 01:17:09.901804 | orchestrator | 2026-03-29 01:17:09.901809 | orchestrator | --- POOLS --- 2026-03-29 01:17:09.901814 | orchestrator | POOL 
ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-03-29 01:17:09.901819 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-03-29 01:17:09.901824 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-03-29 01:17:09.901828 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-03-29 01:17:09.901832 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-03-29 01:17:09.901836 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-03-29 01:17:09.901840 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-03-29 01:17:09.901843 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-03-29 01:17:09.901847 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-03-29 01:17:09.901851 | orchestrator | .rgw.root 9 32 3.1 KiB 6 48 KiB 0 53 GiB 2026-03-29 01:17:09.901855 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-03-29 01:17:09.901859 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-03-29 01:17:09.901862 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.93 35 GiB 2026-03-29 01:17:09.901866 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-03-29 01:17:09.901870 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-03-29 01:17:09.953814 | orchestrator | ++ semver latest 5.0.0 2026-03-29 01:17:10.010511 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-29 01:17:10.010607 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-29 01:17:10.010620 | orchestrator | + osism apply facts 2026-03-29 01:17:21.372646 | orchestrator | 2026-03-29 01:17:21 | INFO  | Prepare task for execution of facts. 2026-03-29 01:17:21.461764 | orchestrator | 2026-03-29 01:17:21 | INFO  | Task ead32149-d3f1-49df-9eaa-d8f79f7ff156 (facts) was prepared for execution. 2026-03-29 01:17:21.461854 | orchestrator | 2026-03-29 01:17:21 | INFO  | It takes a moment until task ead32149-d3f1-49df-9eaa-d8f79f7ff156 (facts) has been started and output is visible here. 
2026-03-29 01:17:34.783810 | orchestrator | 2026-03-29 01:17:34.783878 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-29 01:17:34.783893 | orchestrator | 2026-03-29 01:17:34.783902 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-29 01:17:34.783911 | orchestrator | Sunday 29 March 2026 01:17:24 +0000 (0:00:00.339) 0:00:00.340 ********** 2026-03-29 01:17:34.783920 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:17:34.783929 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:17:34.783937 | orchestrator | ok: [testbed-manager] 2026-03-29 01:17:34.783946 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:17:34.783954 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:17:34.783960 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:17:34.783965 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:17:34.783984 | orchestrator | 2026-03-29 01:17:34.783990 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-29 01:17:34.784002 | orchestrator | Sunday 29 March 2026 01:17:26 +0000 (0:00:01.358) 0:00:01.698 ********** 2026-03-29 01:17:34.784007 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:17:34.784013 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:34.784018 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:34.784023 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:34.784029 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:34.784034 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:17:34.784039 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:34.784044 | orchestrator | 2026-03-29 01:17:34.784049 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-29 01:17:34.784054 | orchestrator | 2026-03-29 01:17:34.784059 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-29 01:17:34.784076 | orchestrator | Sunday 29 March 2026 01:17:27 +0000 (0:00:01.243) 0:00:02.942 ********** 2026-03-29 01:17:34.784085 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:17:34.784097 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:17:34.784106 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:17:34.784115 | orchestrator | ok: [testbed-manager] 2026-03-29 01:17:34.784125 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:17:34.784134 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:17:34.784143 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:17:34.784152 | orchestrator | 2026-03-29 01:17:34.784161 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-29 01:17:34.784171 | orchestrator | 2026-03-29 01:17:34.784180 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-29 01:17:34.784190 | orchestrator | Sunday 29 March 2026 01:17:33 +0000 (0:00:06.263) 0:00:09.205 ********** 2026-03-29 01:17:34.784199 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:17:34.784209 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:34.784218 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:34.784228 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:34.784239 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:34.784248 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:17:34.784254 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:34.784259 | orchestrator | 2026-03-29 01:17:34.784264 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:17:34.784269 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 01:17:34.784275 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-29 01:17:34.784280 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 01:17:34.784285 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 01:17:34.784290 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 01:17:34.784296 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 01:17:34.784301 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 01:17:34.784306 | orchestrator | 2026-03-29 01:17:34.784311 | orchestrator | 2026-03-29 01:17:34.784316 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:17:34.784321 | orchestrator | Sunday 29 March 2026 01:17:34 +0000 (0:00:00.747) 0:00:09.952 ********** 2026-03-29 01:17:34.784333 | orchestrator | =============================================================================== 2026-03-29 01:17:34.784339 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.26s 2026-03-29 01:17:34.784344 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.36s 2026-03-29 01:17:34.784349 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s 2026-03-29 01:17:34.784354 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.75s 2026-03-29 01:17:34.993301 | orchestrator | + osism validate ceph-mons 2026-03-29 01:18:05.870486 | orchestrator | 2026-03-29 01:18:05.870552 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-03-29 01:18:05.870562 | orchestrator | 2026-03-29 01:18:05.870570 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-03-29 01:18:05.870579 | orchestrator | Sunday 29 March 2026 01:17:50 +0000 (0:00:00.537) 0:00:00.537 ********** 2026-03-29 01:18:05.870585 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 01:18:05.870590 | orchestrator | 2026-03-29 01:18:05.870594 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-29 01:18:05.870598 | orchestrator | Sunday 29 March 2026 01:17:51 +0000 (0:00:01.008) 0:00:01.546 ********** 2026-03-29 01:18:05.870603 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 01:18:05.870608 | orchestrator | 2026-03-29 01:18:05.870612 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-29 01:18:05.870616 | orchestrator | Sunday 29 March 2026 01:17:51 +0000 (0:00:00.694) 0:00:02.240 ********** 2026-03-29 01:18:05.870621 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:05.870626 | orchestrator | 2026-03-29 01:18:05.870630 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-29 01:18:05.870635 | orchestrator | Sunday 29 March 2026 01:17:51 +0000 (0:00:00.120) 0:00:02.361 ********** 2026-03-29 01:18:05.870639 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:05.870643 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:18:05.870648 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:18:05.870652 | orchestrator | 2026-03-29 01:18:05.870657 | orchestrator | TASK [Get container info] ****************************************************** 2026-03-29 01:18:05.870661 | orchestrator | Sunday 29 March 2026 01:17:52 +0000 (0:00:00.273) 0:00:02.635 ********** 2026-03-29 01:18:05.870666 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:18:05.870670 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:18:05.870674 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:05.870679 | 
orchestrator | 2026-03-29 01:18:05.870683 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-29 01:18:05.870688 | orchestrator | Sunday 29 March 2026 01:17:53 +0000 (0:00:01.544) 0:00:04.179 ********** 2026-03-29 01:18:05.870692 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:05.870697 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:18:05.870702 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:18:05.870706 | orchestrator | 2026-03-29 01:18:05.870710 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-29 01:18:05.870715 | orchestrator | Sunday 29 March 2026 01:17:53 +0000 (0:00:00.299) 0:00:04.478 ********** 2026-03-29 01:18:05.870719 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:05.870724 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:18:05.870734 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:18:05.870739 | orchestrator | 2026-03-29 01:18:05.870743 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-29 01:18:05.870748 | orchestrator | Sunday 29 March 2026 01:17:54 +0000 (0:00:00.323) 0:00:04.802 ********** 2026-03-29 01:18:05.870752 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:05.870756 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:18:05.870761 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:18:05.870765 | orchestrator | 2026-03-29 01:18:05.870770 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-03-29 01:18:05.870774 | orchestrator | Sunday 29 March 2026 01:17:54 +0000 (0:00:00.294) 0:00:05.096 ********** 2026-03-29 01:18:05.870793 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:05.870798 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:18:05.870802 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:18:05.870806 | orchestrator | 2026-03-29 
01:18:05.870811 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-03-29 01:18:05.870815 | orchestrator | Sunday 29 March 2026 01:17:55 +0000 (0:00:00.447) 0:00:05.544 ********** 2026-03-29 01:18:05.870820 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:05.870832 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:18:05.870837 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:18:05.870841 | orchestrator | 2026-03-29 01:18:05.870846 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-29 01:18:05.870850 | orchestrator | Sunday 29 March 2026 01:17:55 +0000 (0:00:00.314) 0:00:05.858 ********** 2026-03-29 01:18:05.870855 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:05.870859 | orchestrator | 2026-03-29 01:18:05.870863 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-29 01:18:05.870868 | orchestrator | Sunday 29 March 2026 01:17:55 +0000 (0:00:00.245) 0:00:06.104 ********** 2026-03-29 01:18:05.870903 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:05.870910 | orchestrator | 2026-03-29 01:18:05.870918 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-29 01:18:05.870923 | orchestrator | Sunday 29 March 2026 01:17:55 +0000 (0:00:00.245) 0:00:06.350 ********** 2026-03-29 01:18:05.870928 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:05.870936 | orchestrator | 2026-03-29 01:18:05.870941 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:18:05.870945 | orchestrator | Sunday 29 March 2026 01:17:56 +0000 (0:00:00.241) 0:00:06.592 ********** 2026-03-29 01:18:05.870949 | orchestrator | 2026-03-29 01:18:05.870954 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:18:05.870958 | orchestrator | 
Sunday 29 March 2026 01:17:56 +0000 (0:00:00.069) 0:00:06.661 ********** 2026-03-29 01:18:05.870962 | orchestrator | 2026-03-29 01:18:05.870967 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:18:05.870971 | orchestrator | Sunday 29 March 2026 01:17:56 +0000 (0:00:00.067) 0:00:06.729 ********** 2026-03-29 01:18:05.870975 | orchestrator | 2026-03-29 01:18:05.870980 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-29 01:18:05.870984 | orchestrator | Sunday 29 March 2026 01:17:56 +0000 (0:00:00.230) 0:00:06.959 ********** 2026-03-29 01:18:05.870988 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:05.870993 | orchestrator | 2026-03-29 01:18:05.870997 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-29 01:18:05.871001 | orchestrator | Sunday 29 March 2026 01:17:56 +0000 (0:00:00.248) 0:00:07.208 ********** 2026-03-29 01:18:05.871006 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:05.871010 | orchestrator | 2026-03-29 01:18:05.871023 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-03-29 01:18:05.871028 | orchestrator | Sunday 29 March 2026 01:17:56 +0000 (0:00:00.250) 0:00:07.458 ********** 2026-03-29 01:18:05.871032 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:05.871037 | orchestrator | 2026-03-29 01:18:05.871041 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-03-29 01:18:05.871046 | orchestrator | Sunday 29 March 2026 01:17:57 +0000 (0:00:00.122) 0:00:07.580 ********** 2026-03-29 01:18:05.871050 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:18:05.871054 | orchestrator | 2026-03-29 01:18:05.871059 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-03-29 01:18:05.871063 | orchestrator | Sunday 
29 March 2026 01:17:58 +0000 (0:00:01.696) 0:00:09.277 ********** 2026-03-29 01:18:05.871069 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:05.871074 | orchestrator | 2026-03-29 01:18:05.871079 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-03-29 01:18:05.871089 | orchestrator | Sunday 29 March 2026 01:17:59 +0000 (0:00:00.309) 0:00:09.586 ********** 2026-03-29 01:18:05.871094 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:05.871099 | orchestrator | 2026-03-29 01:18:05.871104 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-03-29 01:18:05.871109 | orchestrator | Sunday 29 March 2026 01:17:59 +0000 (0:00:00.135) 0:00:09.722 ********** 2026-03-29 01:18:05.871114 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:05.871119 | orchestrator | 2026-03-29 01:18:05.871124 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-03-29 01:18:05.871132 | orchestrator | Sunday 29 March 2026 01:17:59 +0000 (0:00:00.320) 0:00:10.042 ********** 2026-03-29 01:18:05.871137 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:05.871142 | orchestrator | 2026-03-29 01:18:05.871147 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-03-29 01:18:05.871152 | orchestrator | Sunday 29 March 2026 01:17:59 +0000 (0:00:00.305) 0:00:10.347 ********** 2026-03-29 01:18:05.871157 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:05.871162 | orchestrator | 2026-03-29 01:18:05.871167 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-03-29 01:18:05.871172 | orchestrator | Sunday 29 March 2026 01:17:59 +0000 (0:00:00.118) 0:00:10.466 ********** 2026-03-29 01:18:05.871178 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:05.871183 | orchestrator | 2026-03-29 01:18:05.871188 | orchestrator | TASK [Prepare 
status test vars] ************************************************ 2026-03-29 01:18:05.871193 | orchestrator | Sunday 29 March 2026 01:18:00 +0000 (0:00:00.140) 0:00:10.606 ********** 2026-03-29 01:18:05.871198 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:05.871203 | orchestrator | 2026-03-29 01:18:05.871208 | orchestrator | TASK [Gather status data] ****************************************************** 2026-03-29 01:18:05.871213 | orchestrator | Sunday 29 March 2026 01:18:00 +0000 (0:00:00.289) 0:00:10.896 ********** 2026-03-29 01:18:05.871219 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:18:05.871224 | orchestrator | 2026-03-29 01:18:05.871229 | orchestrator | TASK [Set health test data] **************************************************** 2026-03-29 01:18:05.871234 | orchestrator | Sunday 29 March 2026 01:18:01 +0000 (0:00:01.309) 0:00:12.206 ********** 2026-03-29 01:18:05.871242 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:05.871249 | orchestrator | 2026-03-29 01:18:05.871257 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-03-29 01:18:05.871264 | orchestrator | Sunday 29 March 2026 01:18:02 +0000 (0:00:00.333) 0:00:12.539 ********** 2026-03-29 01:18:05.871272 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:05.871279 | orchestrator | 2026-03-29 01:18:05.871287 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-03-29 01:18:05.871296 | orchestrator | Sunday 29 March 2026 01:18:02 +0000 (0:00:00.140) 0:00:12.680 ********** 2026-03-29 01:18:05.871304 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:05.871312 | orchestrator | 2026-03-29 01:18:05.871317 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-03-29 01:18:05.871322 | orchestrator | Sunday 29 March 2026 01:18:02 +0000 (0:00:00.161) 0:00:12.842 ********** 2026-03-29 01:18:05.871327 | orchestrator | 
skipping: [testbed-node-0] 2026-03-29 01:18:05.871332 | orchestrator | 2026-03-29 01:18:05.871339 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-03-29 01:18:05.871347 | orchestrator | Sunday 29 March 2026 01:18:02 +0000 (0:00:00.139) 0:00:12.982 ********** 2026-03-29 01:18:05.871354 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:05.871361 | orchestrator | 2026-03-29 01:18:05.871369 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-29 01:18:05.871377 | orchestrator | Sunday 29 March 2026 01:18:02 +0000 (0:00:00.134) 0:00:13.116 ********** 2026-03-29 01:18:05.871385 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 01:18:05.871392 | orchestrator | 2026-03-29 01:18:05.871401 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-29 01:18:05.871417 | orchestrator | Sunday 29 March 2026 01:18:02 +0000 (0:00:00.260) 0:00:13.377 ********** 2026-03-29 01:18:05.871423 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:05.871428 | orchestrator | 2026-03-29 01:18:05.871434 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-29 01:18:05.871441 | orchestrator | Sunday 29 March 2026 01:18:03 +0000 (0:00:00.238) 0:00:13.616 ********** 2026-03-29 01:18:05.871448 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 01:18:05.871455 | orchestrator | 2026-03-29 01:18:05.871463 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-29 01:18:05.871470 | orchestrator | Sunday 29 March 2026 01:18:04 +0000 (0:00:01.821) 0:00:15.438 ********** 2026-03-29 01:18:05.871478 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 01:18:05.871485 | orchestrator | 2026-03-29 01:18:05.871492 | orchestrator | TASK [Aggregate 
test results step three] *************************************** 2026-03-29 01:18:05.871500 | orchestrator | Sunday 29 March 2026 01:18:05 +0000 (0:00:00.287) 0:00:15.725 ********** 2026-03-29 01:18:05.871508 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 01:18:05.871514 | orchestrator | 2026-03-29 01:18:05.871525 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:18:08.094265 | orchestrator | Sunday 29 March 2026 01:18:05 +0000 (0:00:00.636) 0:00:16.362 ********** 2026-03-29 01:18:08.094347 | orchestrator | 2026-03-29 01:18:08.094354 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:18:08.094359 | orchestrator | Sunday 29 March 2026 01:18:05 +0000 (0:00:00.069) 0:00:16.432 ********** 2026-03-29 01:18:08.094363 | orchestrator | 2026-03-29 01:18:08.094367 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:18:08.094372 | orchestrator | Sunday 29 March 2026 01:18:06 +0000 (0:00:00.068) 0:00:16.501 ********** 2026-03-29 01:18:08.094375 | orchestrator | 2026-03-29 01:18:08.094379 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-29 01:18:08.094383 | orchestrator | Sunday 29 March 2026 01:18:06 +0000 (0:00:00.076) 0:00:16.577 ********** 2026-03-29 01:18:08.094388 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 01:18:08.094392 | orchestrator | 2026-03-29 01:18:08.094396 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-29 01:18:08.094400 | orchestrator | Sunday 29 March 2026 01:18:07 +0000 (0:00:01.302) 0:00:17.880 ********** 2026-03-29 01:18:08.094406 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-29 01:18:08.094412 | orchestrator |  "msg": [ 2026-03-29 
01:18:08.094420 | orchestrator |  "Validator run completed.", 2026-03-29 01:18:08.094427 | orchestrator |  "You can find the report file here:", 2026-03-29 01:18:08.094434 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-03-29T01:17:50+00:00-report.json", 2026-03-29 01:18:08.094442 | orchestrator |  "on the following host:", 2026-03-29 01:18:08.094448 | orchestrator |  "testbed-manager" 2026-03-29 01:18:08.094455 | orchestrator |  ] 2026-03-29 01:18:08.094460 | orchestrator | } 2026-03-29 01:18:08.094467 | orchestrator | 2026-03-29 01:18:08.094472 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:18:08.094480 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-29 01:18:08.094488 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 01:18:08.094496 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 01:18:08.094501 | orchestrator | 2026-03-29 01:18:08.094507 | orchestrator | 2026-03-29 01:18:08.094513 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:18:08.094567 | orchestrator | Sunday 29 March 2026 01:18:07 +0000 (0:00:00.407) 0:00:18.287 ********** 2026-03-29 01:18:08.094575 | orchestrator | =============================================================================== 2026-03-29 01:18:08.094581 | orchestrator | Aggregate test results step one ----------------------------------------- 1.82s 2026-03-29 01:18:08.094587 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.70s 2026-03-29 01:18:08.094593 | orchestrator | Get container info ------------------------------------------------------ 1.54s 2026-03-29 01:18:08.094599 | orchestrator | Gather status data 
------------------------------------------------------ 1.31s 2026-03-29 01:18:08.094605 | orchestrator | Write report file ------------------------------------------------------- 1.30s 2026-03-29 01:18:08.094611 | orchestrator | Get timestamp for report file ------------------------------------------- 1.01s 2026-03-29 01:18:08.094617 | orchestrator | Create report output directory ------------------------------------------ 0.69s 2026-03-29 01:18:08.094622 | orchestrator | Aggregate test results step three --------------------------------------- 0.64s 2026-03-29 01:18:08.094628 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.45s 2026-03-29 01:18:08.094633 | orchestrator | Print report file information ------------------------------------------- 0.41s 2026-03-29 01:18:08.094638 | orchestrator | Flush handlers ---------------------------------------------------------- 0.37s 2026-03-29 01:18:08.094644 | orchestrator | Set health test data ---------------------------------------------------- 0.33s 2026-03-29 01:18:08.094649 | orchestrator | Set test result to passed if container is existing ---------------------- 0.32s 2026-03-29 01:18:08.094655 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.32s 2026-03-29 01:18:08.094660 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.31s 2026-03-29 01:18:08.094666 | orchestrator | Set quorum test data ---------------------------------------------------- 0.31s 2026-03-29 01:18:08.094672 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.31s 2026-03-29 01:18:08.094678 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s 2026-03-29 01:18:08.094684 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s 2026-03-29 01:18:08.094690 | orchestrator | Prepare status test vars 
------------------------------------------------ 0.29s 2026-03-29 01:18:08.299869 | orchestrator | + osism validate ceph-mgrs 2026-03-29 01:18:37.775273 | orchestrator | 2026-03-29 01:18:37.775364 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-03-29 01:18:37.775373 | orchestrator | 2026-03-29 01:18:37.775377 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-29 01:18:37.775382 | orchestrator | Sunday 29 March 2026 01:18:23 +0000 (0:00:00.536) 0:00:00.536 ********** 2026-03-29 01:18:37.775387 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 01:18:37.775391 | orchestrator | 2026-03-29 01:18:37.775395 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-29 01:18:37.775399 | orchestrator | Sunday 29 March 2026 01:18:24 +0000 (0:00:01.045) 0:00:01.582 ********** 2026-03-29 01:18:37.775404 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 01:18:37.775409 | orchestrator | 2026-03-29 01:18:37.775412 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-29 01:18:37.775417 | orchestrator | Sunday 29 March 2026 01:18:25 +0000 (0:00:00.693) 0:00:02.276 ********** 2026-03-29 01:18:37.775435 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:37.775440 | orchestrator | 2026-03-29 01:18:37.775444 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-29 01:18:37.775448 | orchestrator | Sunday 29 March 2026 01:18:25 +0000 (0:00:00.145) 0:00:02.422 ********** 2026-03-29 01:18:37.775451 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:37.775455 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:18:37.775459 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:18:37.775479 | orchestrator | 2026-03-29 01:18:37.775483 | orchestrator | TASK [Get 
container info] ****************************************************** 2026-03-29 01:18:37.775487 | orchestrator | Sunday 29 March 2026 01:18:25 +0000 (0:00:00.278) 0:00:02.700 ********** 2026-03-29 01:18:37.775491 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:18:37.775495 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:37.775499 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:18:37.775502 | orchestrator | 2026-03-29 01:18:37.775506 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-29 01:18:37.775510 | orchestrator | Sunday 29 March 2026 01:18:26 +0000 (0:00:01.470) 0:00:04.171 ********** 2026-03-29 01:18:37.775514 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:37.775518 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:18:37.775525 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:18:37.775529 | orchestrator | 2026-03-29 01:18:37.775533 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-29 01:18:37.775537 | orchestrator | Sunday 29 March 2026 01:18:27 +0000 (0:00:00.277) 0:00:04.449 ********** 2026-03-29 01:18:37.775541 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:37.775544 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:18:37.775548 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:18:37.775552 | orchestrator | 2026-03-29 01:18:37.775556 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-29 01:18:37.775559 | orchestrator | Sunday 29 March 2026 01:18:27 +0000 (0:00:00.296) 0:00:04.745 ********** 2026-03-29 01:18:37.775563 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:37.775567 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:18:37.775571 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:18:37.775574 | orchestrator | 2026-03-29 01:18:37.775578 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-03-29 01:18:37.775582 | orchestrator | Sunday 29 March 2026 01:18:27 +0000 (0:00:00.287) 0:00:05.033 ********** 2026-03-29 01:18:37.775586 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:37.775589 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:18:37.775593 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:18:37.775597 | orchestrator | 2026-03-29 01:18:37.775601 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-03-29 01:18:37.775604 | orchestrator | Sunday 29 March 2026 01:18:28 +0000 (0:00:00.445) 0:00:05.479 ********** 2026-03-29 01:18:37.775608 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:37.775612 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:18:37.775616 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:18:37.775620 | orchestrator | 2026-03-29 01:18:37.775623 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-29 01:18:37.775627 | orchestrator | Sunday 29 March 2026 01:18:28 +0000 (0:00:00.301) 0:00:05.780 ********** 2026-03-29 01:18:37.775631 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:37.775635 | orchestrator | 2026-03-29 01:18:37.775639 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-29 01:18:37.775642 | orchestrator | Sunday 29 March 2026 01:18:28 +0000 (0:00:00.248) 0:00:06.028 ********** 2026-03-29 01:18:37.775646 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:37.775650 | orchestrator | 2026-03-29 01:18:37.775654 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-29 01:18:37.775658 | orchestrator | Sunday 29 March 2026 01:18:29 +0000 (0:00:00.241) 0:00:06.269 ********** 2026-03-29 01:18:37.775661 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:37.775665 | orchestrator | 2026-03-29 01:18:37.775669 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-03-29 01:18:37.775673 | orchestrator | Sunday 29 March 2026 01:18:29 +0000 (0:00:00.244) 0:00:06.514 ********** 2026-03-29 01:18:37.775676 | orchestrator | 2026-03-29 01:18:37.775680 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:18:37.775684 | orchestrator | Sunday 29 March 2026 01:18:29 +0000 (0:00:00.070) 0:00:06.584 ********** 2026-03-29 01:18:37.775688 | orchestrator | 2026-03-29 01:18:37.775696 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:18:37.775700 | orchestrator | Sunday 29 March 2026 01:18:29 +0000 (0:00:00.071) 0:00:06.655 ********** 2026-03-29 01:18:37.775703 | orchestrator | 2026-03-29 01:18:37.775707 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-29 01:18:37.775711 | orchestrator | Sunday 29 March 2026 01:18:29 +0000 (0:00:00.234) 0:00:06.890 ********** 2026-03-29 01:18:37.775715 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:37.775719 | orchestrator | 2026-03-29 01:18:37.775723 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-29 01:18:37.775726 | orchestrator | Sunday 29 March 2026 01:18:29 +0000 (0:00:00.279) 0:00:07.169 ********** 2026-03-29 01:18:37.775730 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:37.775734 | orchestrator | 2026-03-29 01:18:37.775749 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-03-29 01:18:37.775753 | orchestrator | Sunday 29 March 2026 01:18:30 +0000 (0:00:00.303) 0:00:07.473 ********** 2026-03-29 01:18:37.775757 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:37.775761 | orchestrator | 2026-03-29 01:18:37.775764 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-03-29 01:18:37.775768 | orchestrator | Sunday 29 March 2026 01:18:30 +0000 (0:00:00.130) 0:00:07.603 ********** 2026-03-29 01:18:37.775772 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:18:37.775776 | orchestrator | 2026-03-29 01:18:37.775780 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-03-29 01:18:37.775783 | orchestrator | Sunday 29 March 2026 01:18:32 +0000 (0:00:01.882) 0:00:09.486 ********** 2026-03-29 01:18:37.775787 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:37.775791 | orchestrator | 2026-03-29 01:18:37.775795 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-03-29 01:18:37.775799 | orchestrator | Sunday 29 March 2026 01:18:32 +0000 (0:00:00.248) 0:00:09.735 ********** 2026-03-29 01:18:37.775802 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:37.775806 | orchestrator | 2026-03-29 01:18:37.775810 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-03-29 01:18:37.775814 | orchestrator | Sunday 29 March 2026 01:18:32 +0000 (0:00:00.324) 0:00:10.059 ********** 2026-03-29 01:18:37.775818 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:37.775822 | orchestrator | 2026-03-29 01:18:37.775827 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-03-29 01:18:37.775831 | orchestrator | Sunday 29 March 2026 01:18:32 +0000 (0:00:00.138) 0:00:10.198 ********** 2026-03-29 01:18:37.775836 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:18:37.775840 | orchestrator | 2026-03-29 01:18:37.775844 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-29 01:18:37.775849 | orchestrator | Sunday 29 March 2026 01:18:33 +0000 (0:00:00.145) 0:00:10.343 ********** 2026-03-29 01:18:37.775853 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 
01:18:37.775858 | orchestrator | 2026-03-29 01:18:37.775862 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-29 01:18:37.775870 | orchestrator | Sunday 29 March 2026 01:18:33 +0000 (0:00:00.250) 0:00:10.594 ********** 2026-03-29 01:18:37.775876 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:18:37.775882 | orchestrator | 2026-03-29 01:18:37.775888 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-29 01:18:37.775895 | orchestrator | Sunday 29 March 2026 01:18:33 +0000 (0:00:00.260) 0:00:10.855 ********** 2026-03-29 01:18:37.775900 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 01:18:37.775906 | orchestrator | 2026-03-29 01:18:37.775913 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-29 01:18:37.775918 | orchestrator | Sunday 29 March 2026 01:18:35 +0000 (0:00:01.644) 0:00:12.499 ********** 2026-03-29 01:18:37.775925 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 01:18:37.775936 | orchestrator | 2026-03-29 01:18:37.775943 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-29 01:18:37.775949 | orchestrator | Sunday 29 March 2026 01:18:35 +0000 (0:00:00.279) 0:00:12.779 ********** 2026-03-29 01:18:37.775956 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 01:18:37.775961 | orchestrator | 2026-03-29 01:18:37.775967 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:18:37.775973 | orchestrator | Sunday 29 March 2026 01:18:35 +0000 (0:00:00.260) 0:00:13.039 ********** 2026-03-29 01:18:37.775980 | orchestrator | 2026-03-29 01:18:37.775986 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:18:37.775992 | orchestrator 
| Sunday 29 March 2026 01:18:35 +0000 (0:00:00.070) 0:00:13.110 ********** 2026-03-29 01:18:37.776025 | orchestrator | 2026-03-29 01:18:37.776032 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:18:37.776039 | orchestrator | Sunday 29 March 2026 01:18:35 +0000 (0:00:00.071) 0:00:13.181 ********** 2026-03-29 01:18:37.776045 | orchestrator | 2026-03-29 01:18:37.776052 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-29 01:18:37.776058 | orchestrator | Sunday 29 March 2026 01:18:36 +0000 (0:00:00.074) 0:00:13.256 ********** 2026-03-29 01:18:37.776065 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 01:18:37.776071 | orchestrator | 2026-03-29 01:18:37.776078 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-29 01:18:37.776085 | orchestrator | Sunday 29 March 2026 01:18:37 +0000 (0:00:01.293) 0:00:14.549 ********** 2026-03-29 01:18:37.776091 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-29 01:18:37.776098 | orchestrator |  "msg": [ 2026-03-29 01:18:37.776104 | orchestrator |  "Validator run completed.", 2026-03-29 01:18:37.776111 | orchestrator |  "You can find the report file here:", 2026-03-29 01:18:37.776117 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-03-29T01:18:24+00:00-report.json", 2026-03-29 01:18:37.776125 | orchestrator |  "on the following host:", 2026-03-29 01:18:37.776132 | orchestrator |  "testbed-manager" 2026-03-29 01:18:37.776138 | orchestrator |  ] 2026-03-29 01:18:37.776144 | orchestrator | } 2026-03-29 01:18:37.776150 | orchestrator | 2026-03-29 01:18:37.776156 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:18:37.776164 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-03-29 01:18:37.776171 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 01:18:37.776184 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 01:18:38.123287 | orchestrator | 2026-03-29 01:18:38.123378 | orchestrator | 2026-03-29 01:18:38.123387 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:18:38.123395 | orchestrator | Sunday 29 March 2026 01:18:37 +0000 (0:00:00.431) 0:00:14.981 ********** 2026-03-29 01:18:38.123402 | orchestrator | =============================================================================== 2026-03-29 01:18:38.123407 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.88s 2026-03-29 01:18:38.123414 | orchestrator | Aggregate test results step one ----------------------------------------- 1.64s 2026-03-29 01:18:38.123420 | orchestrator | Get container info ------------------------------------------------------ 1.47s 2026-03-29 01:18:38.123427 | orchestrator | Write report file ------------------------------------------------------- 1.29s 2026-03-29 01:18:38.123432 | orchestrator | Get timestamp for report file ------------------------------------------- 1.05s 2026-03-29 01:18:38.123439 | orchestrator | Create report output directory ------------------------------------------ 0.69s 2026-03-29 01:18:38.123470 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.45s 2026-03-29 01:18:38.123477 | orchestrator | Print report file information ------------------------------------------- 0.43s 2026-03-29 01:18:38.123483 | orchestrator | Flush handlers ---------------------------------------------------------- 0.38s 2026-03-29 01:18:38.123489 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.32s 2026-03-29 01:18:38.123495 | 
orchestrator | Fail due to missing containers ------------------------------------------ 0.30s 2026-03-29 01:18:38.123501 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.30s 2026-03-29 01:18:38.123506 | orchestrator | Set test result to passed if container is existing ---------------------- 0.30s 2026-03-29 01:18:38.123513 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s 2026-03-29 01:18:38.123519 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s 2026-03-29 01:18:38.123525 | orchestrator | Print report file information ------------------------------------------- 0.28s 2026-03-29 01:18:38.123531 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s 2026-03-29 01:18:38.123537 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s 2026-03-29 01:18:38.123543 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.26s 2026-03-29 01:18:38.123549 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s 2026-03-29 01:18:38.307444 | orchestrator | + osism validate ceph-osds 2026-03-29 01:18:57.428526 | orchestrator | 2026-03-29 01:18:57.428643 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-03-29 01:18:57.428656 | orchestrator | 2026-03-29 01:18:57.428696 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-29 01:18:57.428704 | orchestrator | Sunday 29 March 2026 01:18:53 +0000 (0:00:00.510) 0:00:00.510 ********** 2026-03-29 01:18:57.428711 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 01:18:57.428719 | orchestrator | 2026-03-29 01:18:57.428726 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-03-29 01:18:57.428732 | orchestrator | Sunday 29 March 2026 01:18:54 +0000 (0:00:00.971) 0:00:01.481 ********** 2026-03-29 01:18:57.428739 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 01:18:57.428745 | orchestrator | 2026-03-29 01:18:57.428752 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-29 01:18:57.428758 | orchestrator | Sunday 29 March 2026 01:18:54 +0000 (0:00:00.234) 0:00:01.715 ********** 2026-03-29 01:18:57.428764 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 01:18:57.428770 | orchestrator | 2026-03-29 01:18:57.428776 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-29 01:18:57.428783 | orchestrator | Sunday 29 March 2026 01:18:55 +0000 (0:00:00.732) 0:00:02.448 ********** 2026-03-29 01:18:57.428789 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:18:57.428797 | orchestrator | 2026-03-29 01:18:57.428803 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-03-29 01:18:57.428809 | orchestrator | Sunday 29 March 2026 01:18:55 +0000 (0:00:00.130) 0:00:02.578 ********** 2026-03-29 01:18:57.428816 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:18:57.428822 | orchestrator | 2026-03-29 01:18:57.428828 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-29 01:18:57.428834 | orchestrator | Sunday 29 March 2026 01:18:55 +0000 (0:00:00.123) 0:00:02.702 ********** 2026-03-29 01:18:57.428841 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:18:57.428847 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:18:57.428853 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:18:57.428859 | orchestrator | 2026-03-29 01:18:57.428865 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-03-29 01:18:57.428871 | orchestrator | Sunday 29 March 2026 01:18:56 +0000 (0:00:00.459) 0:00:03.162 ********** 2026-03-29 01:18:57.428899 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:18:57.428906 | orchestrator | 2026-03-29 01:18:57.428911 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-29 01:18:57.428917 | orchestrator | Sunday 29 March 2026 01:18:56 +0000 (0:00:00.166) 0:00:03.328 ********** 2026-03-29 01:18:57.428923 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:18:57.428929 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:18:57.428935 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:18:57.428941 | orchestrator | 2026-03-29 01:18:57.428961 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-03-29 01:18:57.428968 | orchestrator | Sunday 29 March 2026 01:18:56 +0000 (0:00:00.313) 0:00:03.642 ********** 2026-03-29 01:18:57.428975 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:18:57.428981 | orchestrator | 2026-03-29 01:18:57.428987 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-29 01:18:57.428994 | orchestrator | Sunday 29 March 2026 01:18:56 +0000 (0:00:00.355) 0:00:03.998 ********** 2026-03-29 01:18:57.429000 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:18:57.429006 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:18:57.429012 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:18:57.429018 | orchestrator | 2026-03-29 01:18:57.429024 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-03-29 01:18:57.429031 | orchestrator | Sunday 29 March 2026 01:18:57 +0000 (0:00:00.298) 0:00:04.297 ********** 2026-03-29 01:18:57.429039 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3298d1d0f4d88306008d54c47b72542898ee81ac36ce41740ae38203322a51e4', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-29 01:18:57.429049 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a0d472768c3129cd5b225b721cce5bafc86c60b71a22ab56577f1d5fe548f984', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-29 01:18:57.429058 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ca462f39a2375138087a1806f72c26b38ee655e35ee0438b7547ea7b1b4eafd0', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-29 01:18:57.429082 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c4cc6c54f8b32d51f372be1fdc5c59ab837f2c5b4b5080b3d297cba22f4b8ce4', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-29 01:18:57.429100 | orchestrator | skipping: [testbed-node-3] => (item={'id': '867ca1b33114d49f6031af55de7d198c3cb31b674eef956418ae2980580d8b74', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-03-29 01:18:57.429120 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd9e684e354de279d97d2f4d5a0e1404782efee9b856e8e25c9f3d0ee236a3e15', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-03-29 01:18:57.429126 | orchestrator | skipping: [testbed-node-3] => (item={'id': '577c5fca7b04a3ac5cf5c424d602b73cc428ce19e4534de2e9f013a63d0f2e05', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-03-29 01:18:57.429133 | 
orchestrator | skipping: [testbed-node-3] => (item={'id': '0bdad7245bbef3b5fcc2886cfa5f11846cdbc752a6c6a557986202ce8e0b2a31', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-03-29 01:18:57.429140 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'accd9c1696404420f27892504a1ec64ab5df82539f2abc8b1ae867444087e735', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})  2026-03-29 01:18:57.429153 | orchestrator | skipping: [testbed-node-3] => (item={'id': '39042f0e8e20b809ab5e62fbbf68eab2d1cab08db2346aba74b8650de5c9d5c7', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2026-03-29 01:18:57.429160 | orchestrator | ok: [testbed-node-3] => (item={'id': 'e6390b2a53eca18ff5268245e9313a23de092d4b5ce8f86503168f13e69327b7', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-03-29 01:18:57.429168 | orchestrator | ok: [testbed-node-3] => (item={'id': 'f5840f9f6c3d24f8354599ec4e1ff142d697439b516f67937c3d0085c2e31f95', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-03-29 01:18:57.429174 | orchestrator | skipping: [testbed-node-3] => (item={'id': '10afffe7861e6c4e2ad562bb7ef78649197c1495bc34a46d1ae243c24d59f21d', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-03-29 01:18:57.429181 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6d8081eea5db17acff3be909eb65103dba1c9ce29a5d2dec3d98f8299d0100e0', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 
27 minutes (healthy)'})  2026-03-29 01:18:57.429187 | orchestrator | skipping: [testbed-node-3] => (item={'id': '98dce2d6f34b312e6406048fe91f02be235b1bf044e500da65c2077fe2fe1160', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-03-29 01:18:57.429195 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9cc740a2c29d4b31d4f5d2a4c890677f6d653fbfe9dc24ec0f568eeba2d76a2c', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-03-29 01:18:57.429201 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'da10731728b9e64bd74f9e5462bcc5255a99d8b51be227ee11c0d52ac7264509', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-03-29 01:18:57.429208 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c418454f7bbb2b2f3a85afbb2fee8b761e5bfa03751759f7e998f8b0c7217062', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-29 01:18:57.429214 | orchestrator | skipping: [testbed-node-4] => (item={'id': '035835a0884c1696c645af54156bf72b2853031c251e59f7f327a19f04d128fb', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-29 01:18:57.429219 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f9c4121385c22e9b3561fccf9b9da7edda8a850588cd373cd13002fcda6e02de', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-29 01:18:57.429227 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ae38ef1bbdfd1d688f78169ea9a7c8fd6c9e21fac3a1556f5651c46792345d17', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 
'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-29 01:18:57.429237 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5e7d64d411ad2b5bdf55356780f90647c985fb18d2f7b66384250967cabe0186', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-29 01:18:57.562343 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cc63a37c4e98c75a62d738a11d6bcef17d6f23a079ca53d4a59f65b257d73763', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2026-03-29 01:18:57.562481 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6618309e84f9f16222a4bf4d0d1a5d0f2e117c9d0f23219f668ca2eddc581333', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-03-29 01:18:57.562496 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3c4ad9b3fdc15a324c66ced32178db72f1e55aaa62a86324d76f48ae71c5cc11', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-03-29 01:18:57.562505 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'aa7f9e1960e85c220ef7cf308f8a01d916be5cacecaeac27d67e62d43d4e0b4e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-03-29 01:18:57.562512 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1523bdeba486617b710b3a79657708689917f60a54c312649f591fbcda3bf413', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})  2026-03-29 01:18:57.562518 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'859b90fc48b281f2b3ff644c0b239a12ab4e508fd960fedd3fea526472905aa1', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2026-03-29 01:18:57.562527 | orchestrator | ok: [testbed-node-4] => (item={'id': '0114a723d307836097ca9ebac4289d39aa037273d3ae81d8e679ae346004ea14', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-03-29 01:18:57.562534 | orchestrator | ok: [testbed-node-4] => (item={'id': '777acf53d08f828aa363ce6758529192244ca930c95ce35e136fb79eb9657665', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-03-29 01:18:57.562540 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0f46622e1753be2b38f2016b43df512fd94c997351a0ba40ba4729b3dca218b4', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-03-29 01:18:57.562546 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f131f4d17200e68d1b182a5785f3e51458d766b803d02246e15d7d724293dbfe', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-03-29 01:18:57.562552 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bb08283944284777a38d36e572c270aff3eba4466fa7372a37e8bec1b7c0e7c2', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-03-29 01:18:57.562560 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd48a650f6eadbf38716fe7258696b0b0a5e3dfc707c151d6b21186c9d7a30228', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-03-29 01:18:57.562566 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': '087580ae783e69ff5418b11f86ae195e4e5da88926a006e52dcb4a3f2d986ea0', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-03-29 01:18:57.562573 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cfd853747b18a45688f6e156da6f848945bc9867e6b773fc0336203dc153cb1f', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-29 01:18:57.562579 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cebf24810d86c93af581cd5b8f0b18ea0b30b09657b161101eaa3e2510d5a13b', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-29 01:18:57.562610 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f2e602f296b07eb5293211a2898ce6f5f2eeda8e9bd4f79787033e8c7317f618', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-29 01:18:57.562618 | orchestrator | skipping: [testbed-node-5] => (item={'id': '286c4a67d1dfbc251f8a27cb3908f57827ffe059f2050de2979d78e52ab4a5d2', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-29 01:18:57.562624 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cdcc4631efcd2808bad22cb0e8d385a437c480ec156d6fe34c8b4bf45a33616b', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-29 01:18:57.562630 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd2dfe11b4011c4fc7b8fd5adcc7f043f57d9fa9aeecdb91a3c7a34c860437e9b', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 
'status': 'Up 13 minutes'})  2026-03-29 01:18:57.562635 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f6eb0c53a0b498de89be6282b5e793efa1eb1dd5fdfce21d0ec152e23b57d3f0', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-03-29 01:18:57.562641 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8d4fdbf91ac45ceaee16988bb510284faa4532768d3fe4f44dc186b0c6446ea8', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-03-29 01:18:57.562646 | orchestrator | skipping: [testbed-node-5] => (item={'id': '94799b7fbd7d64217f2af7c912b73ccb6f1a1f545442a78fee62492bb4509d9c', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-03-29 01:18:57.562651 | orchestrator | skipping: [testbed-node-5] => (item={'id': '33ad58a945cd88d002bc46750920c75003f1565627dcb90d1c96d19cc8244470', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})  2026-03-29 01:18:57.562656 | orchestrator | skipping: [testbed-node-5] => (item={'id': '807706f3ba9af5b443f2c5b2db44c313883b104bcb8f803363e9308d98fc93aa', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2026-03-29 01:18:57.562677 | orchestrator | ok: [testbed-node-5] => (item={'id': 'ed146a0b45b275e36f227ef28ef4750abbad23bc27c31727382b699101c5dbaa', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-03-29 01:18:57.562683 | orchestrator | ok: [testbed-node-5] => (item={'id': 'c561de8f63ac76ea7b0657b75b87d678c31f4e3dffeae87c3bb50b41d957d520', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-03-29 01:18:57.562689 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd88e13c1a73bab0c6fa7997dc53d454ae83499154a5760fd4fb8a85516d6b007', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-03-29 01:18:57.562695 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f30f3dd48ed8d2ea914c95c1808585c9f9caea832d7927c6882656a142da3166', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-03-29 01:18:57.562701 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd2792ac3d4c97b11a54e7b5dee5702ddf48c1318687f5907cd6c0d2b07812d71', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2026-03-29 01:18:57.562715 | orchestrator | skipping: [testbed-node-5] => (item={'id': '28d4a3ada12c9455d6b86eee35966d24eedf906d7e8a672dab7134cb4e92a44f', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-03-29 01:18:57.562721 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7947ef46e090c0cd4208dbdad7b002b0484e1ceacacd3d2fde3e9b03b10da24f', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-03-29 01:18:57.562735 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7a391e1f352d13af35e0e29b87179b4740f6efe8be5f1c506a72d9dcdaea598b', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-29 01:19:10.428752 | orchestrator | 2026-03-29 01:19:10.428815 | orchestrator | TASK [Get count of ceph-osd containers on 
host] ******************************** 2026-03-29 01:19:10.428825 | orchestrator | Sunday 29 March 2026 01:18:57 +0000 (0:00:00.663) 0:00:04.961 ********** 2026-03-29 01:19:10.428832 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:19:10.428838 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:19:10.428844 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:19:10.428850 | orchestrator | 2026-03-29 01:19:10.428856 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-03-29 01:19:10.428862 | orchestrator | Sunday 29 March 2026 01:18:58 +0000 (0:00:00.315) 0:00:05.277 ********** 2026-03-29 01:19:10.428885 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:19:10.428892 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:19:10.428899 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:19:10.428905 | orchestrator | 2026-03-29 01:19:10.428912 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-03-29 01:19:10.428918 | orchestrator | Sunday 29 March 2026 01:18:58 +0000 (0:00:00.268) 0:00:05.545 ********** 2026-03-29 01:19:10.428925 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:19:10.428931 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:19:10.428937 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:19:10.428944 | orchestrator | 2026-03-29 01:19:10.428950 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-29 01:19:10.428960 | orchestrator | Sunday 29 March 2026 01:18:58 +0000 (0:00:00.285) 0:00:05.831 ********** 2026-03-29 01:19:10.428968 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:19:10.428974 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:19:10.428980 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:19:10.428987 | orchestrator | 2026-03-29 01:19:10.428993 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-03-29 
01:19:10.428999 | orchestrator | Sunday 29 March 2026 01:18:59 +0000 (0:00:00.455) 0:00:06.286 ********** 2026-03-29 01:19:10.429006 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-03-29 01:19:10.429016 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-03-29 01:19:10.429024 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:19:10.429031 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-03-29 01:19:10.429037 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-03-29 01:19:10.429043 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:19:10.429048 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-03-29 01:19:10.429055 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-03-29 01:19:10.429061 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:19:10.429067 | orchestrator | 2026-03-29 01:19:10.429072 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-03-29 01:19:10.429078 | orchestrator | Sunday 29 March 2026 01:18:59 +0000 (0:00:00.316) 0:00:06.603 ********** 2026-03-29 01:19:10.429098 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:19:10.429105 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:19:10.429110 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:19:10.429144 | orchestrator | 2026-03-29 01:19:10.429151 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-03-29 01:19:10.429157 | orchestrator | Sunday 29 March 2026 01:18:59 +0000 (0:00:00.326) 0:00:06.929 ********** 2026-03-29 01:19:10.429164 | orchestrator | skipping: [testbed-node-3] 
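The tasks above build a per-host list of ceph-osd containers and then fail or pass the test based on how many are not in the `running` state. A minimal sketch of that check in Python, using container facts shaped like the items echoed in the log (the variable names here are illustrative, not the playbook's actual names):

```python
# Container facts as echoed by the play, e.g. for testbed-node-3.
containers = [
    {"name": "ceph-osd-3", "osd_id": "3", "state": "running"},
    {"name": "ceph-osd-0", "osd_id": "0", "state": "running"},
]

# Collect ceph-osd entries that are not running; an empty list means
# the "all containers are running" branch sets the result to passed.
not_running = [c for c in containers if c["state"] != "running"]
result = "passed" if not not_running else "failed"
print(result, [c["name"] for c in not_running])
```

With every OSD container running, as in this run, the list is empty and the result is `passed`, which matches the skipped "failed" tasks and the ok "passed" tasks above.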
2026-03-29 01:19:10.429170 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:19:10.429176 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:19:10.429182 | orchestrator | 2026-03-29 01:19:10.429188 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-03-29 01:19:10.429194 | orchestrator | Sunday 29 March 2026 01:19:00 +0000 (0:00:00.314) 0:00:07.244 ********** 2026-03-29 01:19:10.429200 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:19:10.429206 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:19:10.429212 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:19:10.429218 | orchestrator | 2026-03-29 01:19:10.429225 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-03-29 01:19:10.429231 | orchestrator | Sunday 29 March 2026 01:19:00 +0000 (0:00:00.490) 0:00:07.734 ********** 2026-03-29 01:19:10.429237 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:19:10.429243 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:19:10.429249 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:19:10.429255 | orchestrator | 2026-03-29 01:19:10.429261 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-29 01:19:10.429268 | orchestrator | Sunday 29 March 2026 01:19:00 +0000 (0:00:00.305) 0:00:08.039 ********** 2026-03-29 01:19:10.429274 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:19:10.429280 | orchestrator | 2026-03-29 01:19:10.429286 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-29 01:19:10.429302 | orchestrator | Sunday 29 March 2026 01:19:01 +0000 (0:00:00.250) 0:00:08.290 ********** 2026-03-29 01:19:10.429309 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:19:10.429315 | orchestrator | 2026-03-29 01:19:10.429322 | orchestrator | TASK [Aggregate test results step three] 
*************************************** 2026-03-29 01:19:10.429328 | orchestrator | Sunday 29 March 2026 01:19:01 +0000 (0:00:00.250) 0:00:08.541 ********** 2026-03-29 01:19:10.429335 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:19:10.429341 | orchestrator | 2026-03-29 01:19:10.429348 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:19:10.429355 | orchestrator | Sunday 29 March 2026 01:19:01 +0000 (0:00:00.241) 0:00:08.783 ********** 2026-03-29 01:19:10.429362 | orchestrator | 2026-03-29 01:19:10.429369 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:19:10.429376 | orchestrator | Sunday 29 March 2026 01:19:01 +0000 (0:00:00.070) 0:00:08.854 ********** 2026-03-29 01:19:10.429383 | orchestrator | 2026-03-29 01:19:10.429391 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:19:10.429412 | orchestrator | Sunday 29 March 2026 01:19:01 +0000 (0:00:00.071) 0:00:08.925 ********** 2026-03-29 01:19:10.429418 | orchestrator | 2026-03-29 01:19:10.429423 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-29 01:19:10.429427 | orchestrator | Sunday 29 March 2026 01:19:01 +0000 (0:00:00.077) 0:00:09.003 ********** 2026-03-29 01:19:10.429431 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:19:10.429436 | orchestrator | 2026-03-29 01:19:10.429440 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-03-29 01:19:10.429444 | orchestrator | Sunday 29 March 2026 01:19:02 +0000 (0:00:00.615) 0:00:09.618 ********** 2026-03-29 01:19:10.429449 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:19:10.429453 | orchestrator | 2026-03-29 01:19:10.429457 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-29 01:19:10.429462 | 
orchestrator | Sunday 29 March 2026 01:19:02 +0000 (0:00:00.238) 0:00:09.857 ********** 2026-03-29 01:19:10.429472 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:19:10.429477 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:19:10.429481 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:19:10.429485 | orchestrator | 2026-03-29 01:19:10.429490 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-03-29 01:19:10.429494 | orchestrator | Sunday 29 March 2026 01:19:03 +0000 (0:00:00.308) 0:00:10.165 ********** 2026-03-29 01:19:10.429499 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:19:10.429503 | orchestrator | 2026-03-29 01:19:10.429507 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-03-29 01:19:10.429512 | orchestrator | Sunday 29 March 2026 01:19:03 +0000 (0:00:00.232) 0:00:10.397 ********** 2026-03-29 01:19:10.429516 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 01:19:10.429520 | orchestrator | 2026-03-29 01:19:10.429525 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-03-29 01:19:10.429531 | orchestrator | Sunday 29 March 2026 01:19:05 +0000 (0:00:01.833) 0:00:12.231 ********** 2026-03-29 01:19:10.429539 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:19:10.429548 | orchestrator | 2026-03-29 01:19:10.429554 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-03-29 01:19:10.429561 | orchestrator | Sunday 29 March 2026 01:19:05 +0000 (0:00:00.128) 0:00:12.359 ********** 2026-03-29 01:19:10.429567 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:19:10.429573 | orchestrator | 2026-03-29 01:19:10.429579 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-03-29 01:19:10.429586 | orchestrator | Sunday 29 March 2026 01:19:05 +0000 (0:00:00.303) 0:00:12.662 
********** 2026-03-29 01:19:10.429592 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:19:10.429598 | orchestrator | 2026-03-29 01:19:10.429604 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-03-29 01:19:10.429610 | orchestrator | Sunday 29 March 2026 01:19:05 +0000 (0:00:00.117) 0:00:12.780 ********** 2026-03-29 01:19:10.429616 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:19:10.429622 | orchestrator | 2026-03-29 01:19:10.429628 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-29 01:19:10.429634 | orchestrator | Sunday 29 March 2026 01:19:05 +0000 (0:00:00.143) 0:00:12.923 ********** 2026-03-29 01:19:10.429640 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:19:10.429646 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:19:10.429652 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:19:10.429658 | orchestrator | 2026-03-29 01:19:10.429664 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-03-29 01:19:10.429670 | orchestrator | Sunday 29 March 2026 01:19:06 +0000 (0:00:00.451) 0:00:13.374 ********** 2026-03-29 01:19:10.429676 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:19:10.429682 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:19:10.429689 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:19:10.429696 | orchestrator | 2026-03-29 01:19:10.429703 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-03-29 01:19:10.429709 | orchestrator | Sunday 29 March 2026 01:19:07 +0000 (0:00:01.541) 0:00:14.916 ********** 2026-03-29 01:19:10.429715 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:19:10.429722 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:19:10.429728 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:19:10.429735 | orchestrator | 2026-03-29 01:19:10.429742 | orchestrator | TASK [Get 
unencrypted and encrypted OSDs] ************************************** 2026-03-29 01:19:10.429749 | orchestrator | Sunday 29 March 2026 01:19:08 +0000 (0:00:00.329) 0:00:15.245 ********** 2026-03-29 01:19:10.429755 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:19:10.429762 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:19:10.429769 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:19:10.429776 | orchestrator | 2026-03-29 01:19:10.429783 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-03-29 01:19:10.429789 | orchestrator | Sunday 29 March 2026 01:19:09 +0000 (0:00:00.898) 0:00:16.144 ********** 2026-03-29 01:19:10.429802 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:19:10.429809 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:19:10.429816 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:19:10.429823 | orchestrator | 2026-03-29 01:19:10.429829 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-03-29 01:19:10.429841 | orchestrator | Sunday 29 March 2026 01:19:09 +0000 (0:00:00.300) 0:00:16.445 ********** 2026-03-29 01:19:10.429848 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:19:10.429855 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:19:10.429861 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:19:10.429868 | orchestrator | 2026-03-29 01:19:10.429875 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-03-29 01:19:10.429882 | orchestrator | Sunday 29 March 2026 01:19:09 +0000 (0:00:00.312) 0:00:16.758 ********** 2026-03-29 01:19:10.429889 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:19:10.429896 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:19:10.429903 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:19:10.429909 | orchestrator | 2026-03-29 01:19:10.429916 | orchestrator | TASK [Pass if count of unencrypted OSDs equals 
count of OSDs] ****************** 2026-03-29 01:19:10.429923 | orchestrator | Sunday 29 March 2026 01:19:09 +0000 (0:00:00.287) 0:00:17.045 ********** 2026-03-29 01:19:10.429930 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:19:10.429936 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:19:10.429943 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:19:10.429950 | orchestrator | 2026-03-29 01:19:10.429963 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-29 01:19:17.635110 | orchestrator | Sunday 29 March 2026 01:19:10 +0000 (0:00:00.506) 0:00:17.552 ********** 2026-03-29 01:19:17.635271 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:19:17.635286 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:19:17.635295 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:19:17.635303 | orchestrator | 2026-03-29 01:19:17.635312 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-03-29 01:19:17.635321 | orchestrator | Sunday 29 March 2026 01:19:10 +0000 (0:00:00.518) 0:00:18.070 ********** 2026-03-29 01:19:17.635330 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:19:17.635335 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:19:17.635340 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:19:17.635345 | orchestrator | 2026-03-29 01:19:17.635350 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-03-29 01:19:17.635355 | orchestrator | Sunday 29 March 2026 01:19:11 +0000 (0:00:00.513) 0:00:18.583 ********** 2026-03-29 01:19:17.635360 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:19:17.635365 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:19:17.635369 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:19:17.635374 | orchestrator | 2026-03-29 01:19:17.635379 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-03-29 
01:19:17.635383 | orchestrator | Sunday 29 March 2026 01:19:11 +0000 (0:00:00.297) 0:00:18.880 ********** 2026-03-29 01:19:17.635388 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:19:17.635394 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:19:17.635399 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:19:17.635403 | orchestrator | 2026-03-29 01:19:17.635408 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-03-29 01:19:17.635413 | orchestrator | Sunday 29 March 2026 01:19:12 +0000 (0:00:00.461) 0:00:19.342 ********** 2026-03-29 01:19:17.635417 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:19:17.635422 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:19:17.635426 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:19:17.635431 | orchestrator | 2026-03-29 01:19:17.635436 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-29 01:19:17.635440 | orchestrator | Sunday 29 March 2026 01:19:12 +0000 (0:00:00.297) 0:00:19.639 ********** 2026-03-29 01:19:17.635445 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 01:19:17.635477 | orchestrator | 2026-03-29 01:19:17.635489 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-29 01:19:17.635495 | orchestrator | Sunday 29 March 2026 01:19:12 +0000 (0:00:00.257) 0:00:19.897 ********** 2026-03-29 01:19:17.635502 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:19:17.635508 | orchestrator | 2026-03-29 01:19:17.635517 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-29 01:19:17.635523 | orchestrator | Sunday 29 March 2026 01:19:13 +0000 (0:00:00.246) 0:00:20.143 ********** 2026-03-29 01:19:17.635530 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 01:19:17.635537 | orchestrator | 2026-03-29 01:19:17.635543 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-29 01:19:17.635550 | orchestrator | Sunday 29 March 2026 01:19:14 +0000 (0:00:01.735) 0:00:21.879 ********** 2026-03-29 01:19:17.635556 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 01:19:17.635563 | orchestrator | 2026-03-29 01:19:17.635569 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-29 01:19:17.635575 | orchestrator | Sunday 29 March 2026 01:19:15 +0000 (0:00:00.255) 0:00:22.135 ********** 2026-03-29 01:19:17.635583 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 01:19:17.635589 | orchestrator | 2026-03-29 01:19:17.635596 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:19:17.635603 | orchestrator | Sunday 29 March 2026 01:19:15 +0000 (0:00:00.252) 0:00:22.387 ********** 2026-03-29 01:19:17.635610 | orchestrator | 2026-03-29 01:19:17.635617 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:19:17.635624 | orchestrator | Sunday 29 March 2026 01:19:15 +0000 (0:00:00.239) 0:00:22.626 ********** 2026-03-29 01:19:17.635632 | orchestrator | 2026-03-29 01:19:17.635639 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:19:17.635646 | orchestrator | Sunday 29 March 2026 01:19:15 +0000 (0:00:00.066) 0:00:22.692 ********** 2026-03-29 01:19:17.635653 | orchestrator | 2026-03-29 01:19:17.635661 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-29 01:19:17.635669 | orchestrator | Sunday 29 March 2026 01:19:15 +0000 (0:00:00.071) 0:00:22.764 ********** 2026-03-29 01:19:17.635677 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 01:19:17.635685 | orchestrator | 
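The "Get ceph osd tree" / "Parse osd tree from JSON" / "Get OSDs that are not up or in" sequence earlier in the play can be sketched as follows. `ceph osd tree -f json` emits a `nodes` list in which OSD entries carry a `status` field (`up`/`down`) and a `reweight` (0 for an OSD that is out); the sample document below is illustrative, not output from this run:

```python
import json

# Sample `ceph osd tree -f json` payload with one healthy and one
# down+out OSD, to exercise the filter.
osd_tree_json = json.dumps({
    "nodes": [
        {"id": -1, "type": "root", "name": "default"},
        {"id": 0, "type": "osd", "name": "osd.0", "status": "up", "reweight": 1.0},
        {"id": 1, "type": "osd", "name": "osd.1", "status": "down", "reweight": 0.0},
    ]
})

tree = json.loads(osd_tree_json)

# An OSD fails the check if it is not up or its reweight is 0 (out).
bad_osds = [
    n["name"]
    for n in tree["nodes"]
    if n.get("type") == "osd"
    and (n.get("status") != "up" or n.get("reweight", 0) == 0)
]
print(bad_osds)
```

In this run the equivalent list was empty on testbed-node-3, so the "Fail test if OSDs are not up or in" task skipped and the "Pass" task ran.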
2026-03-29 01:19:17.635693 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-29 01:19:17.635700 | orchestrator | Sunday 29 March 2026 01:19:16 +0000 (0:00:01.279) 0:00:24.043 ********** 2026-03-29 01:19:17.635707 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-03-29 01:19:17.635716 | orchestrator |  "msg": [ 2026-03-29 01:19:17.635729 | orchestrator |  "Validator run completed.", 2026-03-29 01:19:17.635737 | orchestrator |  "You can find the report file here:", 2026-03-29 01:19:17.635746 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-03-29T01:18:54+00:00-report.json", 2026-03-29 01:19:17.635755 | orchestrator |  "on the following host:", 2026-03-29 01:19:17.635763 | orchestrator |  "testbed-manager" 2026-03-29 01:19:17.635771 | orchestrator |  ] 2026-03-29 01:19:17.635779 | orchestrator | } 2026-03-29 01:19:17.635788 | orchestrator | 2026-03-29 01:19:17.635796 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:19:17.635805 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-29 01:19:17.635815 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-29 01:19:17.635837 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-29 01:19:17.635850 | orchestrator | 2026-03-29 01:19:17.635855 | orchestrator | 2026-03-29 01:19:17.635860 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:19:17.635899 | orchestrator | Sunday 29 March 2026 01:19:17 +0000 (0:00:00.407) 0:00:24.451 ********** 2026-03-29 01:19:17.635905 | orchestrator | =============================================================================== 2026-03-29 01:19:17.635909 | orchestrator | Get ceph osd tree 
------------------------------------------------------- 1.83s 2026-03-29 01:19:17.635914 | orchestrator | Aggregate test results step one ----------------------------------------- 1.74s 2026-03-29 01:19:17.635919 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.54s 2026-03-29 01:19:17.635924 | orchestrator | Write report file ------------------------------------------------------- 1.28s 2026-03-29 01:19:17.635928 | orchestrator | Get timestamp for report file ------------------------------------------- 0.97s 2026-03-29 01:19:17.635933 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.90s 2026-03-29 01:19:17.635938 | orchestrator | Create report output directory ------------------------------------------ 0.73s 2026-03-29 01:19:17.635942 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.66s 2026-03-29 01:19:17.635947 | orchestrator | Print report file information ------------------------------------------- 0.62s 2026-03-29 01:19:17.635951 | orchestrator | Prepare test data ------------------------------------------------------- 0.52s 2026-03-29 01:19:17.635956 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.51s 2026-03-29 01:19:17.635960 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.51s 2026-03-29 01:19:17.635965 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.49s 2026-03-29 01:19:17.635969 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.46s 2026-03-29 01:19:17.635974 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.46s 2026-03-29 01:19:17.635978 | orchestrator | Prepare test data ------------------------------------------------------- 0.46s 2026-03-29 01:19:17.635983 | orchestrator | Prepare test data 
------------------------------------------------------- 0.45s 2026-03-29 01:19:17.635987 | orchestrator | Print report file information ------------------------------------------- 0.41s 2026-03-29 01:19:17.635992 | orchestrator | Flush handlers ---------------------------------------------------------- 0.38s 2026-03-29 01:19:17.635997 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.36s 2026-03-29 01:19:17.826957 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-03-29 01:19:17.836972 | orchestrator | + set -e 2026-03-29 01:19:17.837047 | orchestrator | + source /opt/manager-vars.sh 2026-03-29 01:19:17.837055 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-29 01:19:17.837060 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-29 01:19:17.837064 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-29 01:19:17.837068 | orchestrator | ++ CEPH_VERSION=reef 2026-03-29 01:19:17.837072 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-29 01:19:17.837078 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-29 01:19:17.837082 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-29 01:19:17.837086 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-29 01:19:17.837091 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-29 01:19:17.837095 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-29 01:19:17.837099 | orchestrator | ++ export ARA=false 2026-03-29 01:19:17.837103 | orchestrator | ++ ARA=false 2026-03-29 01:19:17.837107 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-29 01:19:17.837111 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-29 01:19:17.837115 | orchestrator | ++ export TEMPEST=true 2026-03-29 01:19:17.837118 | orchestrator | ++ TEMPEST=true 2026-03-29 01:19:17.837122 | orchestrator | ++ export IS_ZUUL=true 2026-03-29 01:19:17.837126 | orchestrator | ++ IS_ZUUL=true 2026-03-29 01:19:17.837130 | orchestrator | ++ export 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2026-03-29 01:19:17.837134 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2026-03-29 01:19:17.837170 | orchestrator | ++ export EXTERNAL_API=false 2026-03-29 01:19:17.837176 | orchestrator | ++ EXTERNAL_API=false 2026-03-29 01:19:17.837179 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-29 01:19:17.837183 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-29 01:19:17.837204 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-29 01:19:17.837208 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-29 01:19:17.837212 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-29 01:19:17.837216 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-29 01:19:17.837220 | orchestrator | + source /etc/os-release 2026-03-29 01:19:17.837223 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-03-29 01:19:17.837227 | orchestrator | ++ NAME=Ubuntu 2026-03-29 01:19:17.837231 | orchestrator | ++ VERSION_ID=24.04 2026-03-29 01:19:17.837236 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-03-29 01:19:17.837240 | orchestrator | ++ VERSION_CODENAME=noble 2026-03-29 01:19:17.837243 | orchestrator | ++ ID=ubuntu 2026-03-29 01:19:17.837247 | orchestrator | ++ ID_LIKE=debian 2026-03-29 01:19:17.837251 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-03-29 01:19:17.837255 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-03-29 01:19:17.837259 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-03-29 01:19:17.837263 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-03-29 01:19:17.837268 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-03-29 01:19:17.837271 | orchestrator | ++ LOGO=ubuntu-logo 2026-03-29 01:19:17.837275 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-03-29 01:19:17.837294 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-03-29 
01:19:17.837302 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-29 01:19:17.869006 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-29 01:19:41.359781 | orchestrator | 2026-03-29 01:19:41.359862 | orchestrator | # Status of Elasticsearch 2026-03-29 01:19:41.359870 | orchestrator | 2026-03-29 01:19:41.359874 | orchestrator | + pushd /opt/configuration/contrib 2026-03-29 01:19:41.359880 | orchestrator | + echo 2026-03-29 01:19:41.359884 | orchestrator | + echo '# Status of Elasticsearch' 2026-03-29 01:19:41.359888 | orchestrator | + echo 2026-03-29 01:19:41.359893 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-03-29 01:19:41.527734 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-03-29 01:19:41.527806 | orchestrator | 2026-03-29 01:19:41.527813 | orchestrator | # Status of MariaDB 2026-03-29 01:19:41.527818 | orchestrator | 2026-03-29 01:19:41.527823 | orchestrator | + echo 2026-03-29 01:19:41.527828 | orchestrator | + echo '# Status of MariaDB' 2026-03-29 01:19:41.527832 | orchestrator | + echo 2026-03-29 01:19:41.529013 | orchestrator | ++ semver latest 10.0.0-0 2026-03-29 01:19:41.583504 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-29 01:19:41.583597 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-29 01:19:41.583608 | orchestrator | + osism status database 2026-03-29 01:19:43.145147 | orchestrator | 2026-03-29 01:19:43 | ERROR  | Unable to get ansible vault password 2026-03-29 01:19:43.145203 | orchestrator | 2026-03-29 01:19:43 | 
ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-29 01:19:43.145210 | orchestrator | 2026-03-29 01:19:43 | ERROR  | Dropping encrypted entries 2026-03-29 01:19:43.179291 | orchestrator | 2026-03-29 01:19:43 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0... 2026-03-29 01:19:43.188188 | orchestrator | 2026-03-29 01:19:43 | INFO  | Cluster Status: Primary 2026-03-29 01:19:43.188305 | orchestrator | 2026-03-29 01:19:43 | INFO  | Connected: ON 2026-03-29 01:19:43.188317 | orchestrator | 2026-03-29 01:19:43 | INFO  | Ready: ON 2026-03-29 01:19:43.188321 | orchestrator | 2026-03-29 01:19:43 | INFO  | Cluster Size: 3 2026-03-29 01:19:43.188325 | orchestrator | 2026-03-29 01:19:43 | INFO  | Local State: Synced 2026-03-29 01:19:43.188460 | orchestrator | 2026-03-29 01:19:43 | INFO  | Cluster State UUID: 152f56f1-2b0a-11f1-936c-2f47c1d163ca 2026-03-29 01:19:43.188475 | orchestrator | 2026-03-29 01:19:43 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306 2026-03-29 01:19:43.188495 | orchestrator | 2026-03-29 01:19:43 | INFO  | Galera Version: 26.4.25(r7387a566) 2026-03-29 01:19:43.188500 | orchestrator | 2026-03-29 01:19:43 | INFO  | Local Node UUID: 46955ba8-2b0a-11f1-b5c3-8efd8b4d45b0 2026-03-29 01:19:43.188504 | orchestrator | 2026-03-29 01:19:43 | INFO  | Flow Control Paused: 0.00% 2026-03-29 01:19:43.188508 | orchestrator | 2026-03-29 01:19:43 | INFO  | Recv Queue Avg: 0.0222222 2026-03-29 01:19:43.188511 | orchestrator | 2026-03-29 01:19:43 | INFO  | Send Queue Avg: 0.00119474 2026-03-29 01:19:43.188572 | orchestrator | 2026-03-29 01:19:43 | INFO  | Transactions: 4453 local commits, 6638 replicated, 90 received 2026-03-29 01:19:43.188579 | orchestrator | 2026-03-29 01:19:43 | INFO  | Conflicts: 0 cert failures, 0 bf aborts 2026-03-29 01:19:43.188846 | orchestrator | 2026-03-29 01:19:43 | INFO  | MariaDB Uptime: 22 minutes, 8 seconds 2026-03-29 
01:19:43.188885 | orchestrator | 2026-03-29 01:19:43 | INFO  | Threads: 135 connected, 1 running 2026-03-29 01:19:43.188891 | orchestrator | 2026-03-29 01:19:43 | INFO  | Queries: 214048 total, 0 slow 2026-03-29 01:19:43.188895 | orchestrator | 2026-03-29 01:19:43 | INFO  | Aborted Connects: 144 2026-03-29 01:19:43.188900 | orchestrator | 2026-03-29 01:19:43 | INFO  | MariaDB Galera Cluster validation PASSED 2026-03-29 01:19:43.397767 | orchestrator | 2026-03-29 01:19:43.397813 | orchestrator | # Status of Prometheus 2026-03-29 01:19:43.397819 | orchestrator | 2026-03-29 01:19:43.397824 | orchestrator | + echo 2026-03-29 01:19:43.397828 | orchestrator | + echo '# Status of Prometheus' 2026-03-29 01:19:43.397832 | orchestrator | + echo 2026-03-29 01:19:43.397836 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-03-29 01:19:43.458931 | orchestrator | Unauthorized 2026-03-29 01:19:43.462329 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-03-29 01:19:43.529458 | orchestrator | Unauthorized 2026-03-29 01:19:43.532609 | orchestrator | 2026-03-29 01:19:43.532667 | orchestrator | # Status of RabbitMQ 2026-03-29 01:19:43.532673 | orchestrator | 2026-03-29 01:19:43.532678 | orchestrator | + echo 2026-03-29 01:19:43.532682 | orchestrator | + echo '# Status of RabbitMQ' 2026-03-29 01:19:43.532686 | orchestrator | + echo 2026-03-29 01:19:43.533920 | orchestrator | ++ semver latest 10.0.0-0 2026-03-29 01:19:43.584217 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-29 01:19:43.584333 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-29 01:19:43.584339 | orchestrator | + osism status messaging 2026-03-29 01:19:50.760547 | orchestrator | 2026-03-29 01:19:50 | ERROR  | Unable to get ansible vault password 2026-03-29 01:19:50.760619 | orchestrator | 2026-03-29 01:19:50 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-29 01:19:50.760631 | orchestrator | 
2026-03-29 01:19:50 | ERROR  | Dropping encrypted entries 2026-03-29 01:19:50.797759 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack... 2026-03-29 01:19:50.872409 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-0] RabbitMQ Version: 3.13.7 2026-03-29 01:19:50.872478 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-0] Erlang Version: 26.2.5.15 2026-03-29 01:19:50.872488 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0 2026-03-29 01:19:50.872494 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-0] Cluster Size: 3 2026-03-29 01:19:50.872500 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-03-29 01:19:50.872507 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-03-29 01:19:50.872534 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-0] Partitions: None (healthy) 2026-03-29 01:19:50.872717 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-0] Connections: 206, Channels: 205, Queues: 173 2026-03-29 01:19:50.872954 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-0] Messages: 233 total, 232 ready, 1 unacked 2026-03-29 01:19:50.872967 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-0] Message Rates: 5.8/s publish, 7.2/s deliver 2026-03-29 01:19:50.873198 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-0] Disk Free: 58.1 GB (limit: 0.0 GB) 2026-03-29 01:19:50.873210 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-0] Memory Used: 0.17 GB (limit: 12.54 GB) 2026-03-29 01:19:50.873480 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-0] File Descriptors: 100/1024 2026-03-29 01:19:50.873694 | orchestrator | 2026-03-29 01:19:50 | INFO 
 | [testbed-node-0] Sockets: 54/832 2026-03-29 01:19:50.873708 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack... 2026-03-29 01:19:50.935006 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-1] RabbitMQ Version: 3.13.7 2026-03-29 01:19:50.935120 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-1] Erlang Version: 26.2.5.15 2026-03-29 01:19:50.935131 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1 2026-03-29 01:19:50.935140 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-1] Cluster Size: 3 2026-03-29 01:19:50.935158 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-03-29 01:19:50.935168 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-03-29 01:19:50.935291 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-1] Partitions: None (healthy) 2026-03-29 01:19:50.935306 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-1] Connections: 206, Channels: 205, Queues: 173 2026-03-29 01:19:50.935467 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-1] Messages: 233 total, 232 ready, 1 unacked 2026-03-29 01:19:50.935582 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-1] Message Rates: 6.6/s publish, 7.2/s deliver 2026-03-29 01:19:50.935595 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-1] Disk Free: 58.3 GB (limit: 0.0 GB) 2026-03-29 01:19:50.935871 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-1] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-03-29 01:19:50.935884 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-1] File Descriptors: 130/1024 2026-03-29 01:19:50.936128 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-1] 
Sockets: 84/832 2026-03-29 01:19:50.936142 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack... 2026-03-29 01:19:50.997340 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-2] RabbitMQ Version: 3.13.7 2026-03-29 01:19:50.997384 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-2] Erlang Version: 26.2.5.15 2026-03-29 01:19:50.997531 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2 2026-03-29 01:19:50.997540 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-2] Cluster Size: 3 2026-03-29 01:19:50.997545 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-03-29 01:19:50.997568 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-03-29 01:19:50.997739 | orchestrator | 2026-03-29 01:19:50 | INFO  | [testbed-node-2] Partitions: None (healthy) 2026-03-29 01:19:50.997747 | orchestrator | 2026-03-29 01:19:51 | INFO  | [testbed-node-2] Connections: 206, Channels: 205, Queues: 173 2026-03-29 01:19:50.998379 | orchestrator | 2026-03-29 01:19:51 | INFO  | [testbed-node-2] Messages: 233 total, 232 ready, 1 unacked 2026-03-29 01:19:50.998388 | orchestrator | 2026-03-29 01:19:51 | INFO  | [testbed-node-2] Message Rates: 6.6/s publish, 7.2/s deliver 2026-03-29 01:19:50.998392 | orchestrator | 2026-03-29 01:19:51 | INFO  | [testbed-node-2] Disk Free: 58.3 GB (limit: 0.0 GB) 2026-03-29 01:19:50.998462 | orchestrator | 2026-03-29 01:19:51 | INFO  | [testbed-node-2] Memory Used: 0.17 GB (limit: 12.54 GB) 2026-03-29 01:19:50.998598 | orchestrator | 2026-03-29 01:19:51 | INFO  | [testbed-node-2] File Descriptors: 116/1024 2026-03-29 01:19:50.998738 | orchestrator | 2026-03-29 01:19:51 | INFO  | [testbed-node-2] Sockets: 68/832 
2026-03-29 01:19:50.999195 | orchestrator | 2026-03-29 01:19:51 | INFO  | RabbitMQ Cluster validation PASSED 2026-03-29 01:19:51.243549 | orchestrator | 2026-03-29 01:19:51.243613 | orchestrator | # Status of Redis 2026-03-29 01:19:51.243623 | orchestrator | 2026-03-29 01:19:51.243631 | orchestrator | + echo 2026-03-29 01:19:51.243638 | orchestrator | + echo '# Status of Redis' 2026-03-29 01:19:51.243644 | orchestrator | + echo 2026-03-29 01:19:51.243649 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-03-29 01:19:51.249043 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001404s;;;0.000000;10.000000 2026-03-29 01:19:51.249334 | orchestrator | + popd 2026-03-29 01:19:51.249351 | orchestrator | + echo 2026-03-29 01:19:51.249356 | orchestrator | 2026-03-29 01:19:51.249361 | orchestrator | # Create backup of MariaDB database 2026-03-29 01:19:51.249366 | orchestrator | 2026-03-29 01:19:51.249370 | orchestrator | + echo '# Create backup of MariaDB database' 2026-03-29 01:19:51.249374 | orchestrator | + echo 2026-03-29 01:19:51.249379 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-03-29 01:19:52.551123 | orchestrator | 2026-03-29 01:19:52 | INFO  | Prepare task for execution of mariadb_backup. 2026-03-29 01:19:52.612081 | orchestrator | 2026-03-29 01:19:52 | INFO  | Task 3df7b020-5346-4035-86d9-9a80dfc630bf (mariadb_backup) was prepared for execution. 2026-03-29 01:19:52.612151 | orchestrator | 2026-03-29 01:19:52 | INFO  | It takes a moment until task 3df7b020-5346-4035-86d9-9a80dfc630bf (mariadb_backup) has been started and output is visible here. 
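The per-node RabbitMQ summaries above (cluster size, running nodes, partitions, message totals) are the kind of figures the Management HTTP API exposes via `/api/overview` and `/api/nodes`. A minimal sketch of deriving such a health summary from already-fetched payloads, assuming the RabbitMQ 3.13 management schema field names; the `summarize` helper itself is hypothetical, not part of any library or of the OSISM validator:

```python
# Sketch: derive the cluster-health lines shown in the log from
# RabbitMQ Management API payloads (/api/overview and /api/nodes).
# summarize() is a hypothetical helper, not an existing API.

def summarize(overview: dict, nodes: list) -> dict:
    # Nodes report "running" and any network "partitions" they see.
    running = [n["name"] for n in nodes if n.get("running")]
    partitions = [p for n in nodes for p in n.get("partitions", [])]
    # /api/overview aggregates queue totals across the cluster.
    totals = overview.get("queue_totals", {})
    return {
        "cluster_size": len(nodes),
        "running_nodes": running,
        "partitions_healthy": not partitions,
        "messages_total": totals.get("messages", 0),
        "messages_ready": totals.get("messages_ready", 0),
        "messages_unacked": totals.get("messages_unacknowledged", 0),
    }
```

A pass/fail decision like the `RabbitMQ Cluster validation PASSED` line would then amount to asserting `partitions_healthy` and `len(running_nodes) == cluster_size` on each node's view.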
2026-03-29 01:21:09.399107 | orchestrator | 2026-03-29 01:21:09.399191 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:21:09.399199 | orchestrator | 2026-03-29 01:21:09.399204 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:21:09.399208 | orchestrator | Sunday 29 March 2026 01:19:55 +0000 (0:00:00.233) 0:00:00.233 ********** 2026-03-29 01:21:09.399212 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:21:09.399218 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:21:09.399222 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:21:09.399226 | orchestrator | 2026-03-29 01:21:09.399230 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:21:09.399234 | orchestrator | Sunday 29 March 2026 01:19:56 +0000 (0:00:00.325) 0:00:00.559 ********** 2026-03-29 01:21:09.399238 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-29 01:21:09.399243 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-29 01:21:09.399247 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-29 01:21:09.399267 | orchestrator | 2026-03-29 01:21:09.399271 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-29 01:21:09.399275 | orchestrator | 2026-03-29 01:21:09.399279 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-29 01:21:09.399283 | orchestrator | Sunday 29 March 2026 01:19:56 +0000 (0:00:00.397) 0:00:00.956 ********** 2026-03-29 01:21:09.399287 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 01:21:09.399291 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-29 01:21:09.399295 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-29 01:21:09.399299 | orchestrator | 
2026-03-29 01:21:09.399303 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-29 01:21:09.399316 | orchestrator | Sunday 29 March 2026 01:19:56 +0000 (0:00:00.385) 0:00:01.342 ********** 2026-03-29 01:21:09.399321 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:21:09.399326 | orchestrator | 2026-03-29 01:21:09.399330 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-03-29 01:21:09.399333 | orchestrator | Sunday 29 March 2026 01:19:57 +0000 (0:00:00.674) 0:00:02.017 ********** 2026-03-29 01:21:09.399337 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:21:09.399341 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:21:09.399345 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:21:09.399349 | orchestrator | 2026-03-29 01:21:09.399353 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-03-29 01:21:09.399356 | orchestrator | Sunday 29 March 2026 01:20:00 +0000 (0:00:03.314) 0:00:05.332 ********** 2026-03-29 01:21:09.399360 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:21:09.399365 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:21:09.399369 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:21:09.399373 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-29 01:21:09.399377 | orchestrator | 2026-03-29 01:21:09.399381 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-29 01:21:09.399385 | orchestrator | skipping: no hosts matched 2026-03-29 01:21:09.399389 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-03-29 01:21:09.399393 | orchestrator | 2026-03-29 01:21:09.399397 | orchestrator | PLAY [Start mariadb services] 
************************************************** 2026-03-29 01:21:09.399401 | orchestrator | skipping: no hosts matched 2026-03-29 01:21:09.399405 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-29 01:21:09.399410 | orchestrator | mariadb_bootstrap_restart 2026-03-29 01:21:09.399416 | orchestrator | 2026-03-29 01:21:09.399424 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-29 01:21:09.399434 | orchestrator | skipping: no hosts matched 2026-03-29 01:21:09.399439 | orchestrator | 2026-03-29 01:21:09.399445 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-29 01:21:09.399451 | orchestrator | 2026-03-29 01:21:09.399457 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-29 01:21:09.399463 | orchestrator | Sunday 29 March 2026 01:21:08 +0000 (0:01:07.647) 0:01:12.979 ********** 2026-03-29 01:21:09.399557 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:21:09.399567 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:21:09.399574 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:21:09.399580 | orchestrator | 2026-03-29 01:21:09.399587 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-29 01:21:09.399605 | orchestrator | Sunday 29 March 2026 01:21:08 +0000 (0:00:00.307) 0:01:13.287 ********** 2026-03-29 01:21:09.399612 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:21:09.399624 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:21:09.399628 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:21:09.399632 | orchestrator | 2026-03-29 01:21:09.399636 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:21:09.399649 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-29 01:21:09.399654 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 01:21:09.399659 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 01:21:09.399662 | orchestrator | 2026-03-29 01:21:09.399666 | orchestrator | 2026-03-29 01:21:09.399670 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:21:09.399676 | orchestrator | Sunday 29 March 2026 01:21:09 +0000 (0:00:00.215) 0:01:13.502 ********** 2026-03-29 01:21:09.399682 | orchestrator | =============================================================================== 2026-03-29 01:21:09.399688 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 67.65s 2026-03-29 01:21:09.399710 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.32s 2026-03-29 01:21:09.399718 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.67s 2026-03-29 01:21:09.399724 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s 2026-03-29 01:21:09.399731 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.39s 2026-03-29 01:21:09.399737 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2026-03-29 01:21:09.399743 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s 2026-03-29 01:21:09.399749 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.22s 2026-03-29 01:21:09.577400 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-03-29 01:21:09.587185 | orchestrator | + set -e 2026-03-29 01:21:09.587301 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-29 01:21:09.587311 | 
orchestrator | ++ export INTERACTIVE=false 2026-03-29 01:21:09.587318 | orchestrator | ++ INTERACTIVE=false 2026-03-29 01:21:09.587387 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-29 01:21:09.587393 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-29 01:21:09.587409 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-29 01:21:09.588902 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-29 01:21:09.593945 | orchestrator | 2026-03-29 01:21:09.594049 | orchestrator | # OpenStack endpoints 2026-03-29 01:21:09.594063 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-29 01:21:09.594070 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-29 01:21:09.594077 | orchestrator | + export OS_CLOUD=admin 2026-03-29 01:21:09.594132 | orchestrator | + OS_CLOUD=admin 2026-03-29 01:21:09.594139 | orchestrator | + echo 2026-03-29 01:21:09.594146 | orchestrator | + echo '# OpenStack endpoints' 2026-03-29 01:21:09.594152 | orchestrator | 2026-03-29 01:21:09.594158 | orchestrator | + echo 2026-03-29 01:21:09.594168 | orchestrator | + openstack endpoint list 2026-03-29 01:21:12.960614 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-29 01:21:12.960663 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-03-29 01:21:12.960671 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-29 01:21:12.960678 | orchestrator | | 0ab977d0b68d4f91ae4f98d511f993bf | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-03-29 01:21:12.960685 | orchestrator | | 0ed8872cdbcc4c9b8b02d94d12e1a94d | RegionOne | 
neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-03-29 01:21:12.960701 | orchestrator | | 12a724f5e4c0473b86e1ea28d4ad2a14 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-29 01:21:12.960719 | orchestrator | | 28403a0bc3794f7ab17d3e2f2efeaaf0 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-03-29 01:21:12.960725 | orchestrator | | 3002eb8e2eb84488aebfd4635323881a | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-03-29 01:21:12.960731 | orchestrator | | 3a55363abb944d9480d03a1585d995bf | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-03-29 01:21:12.960737 | orchestrator | | 4dafd8b2149048f5a109e2fa1f6264c9 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-03-29 01:21:12.960743 | orchestrator | | 56b3d61d339d4acaa117b0037c919060 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-03-29 01:21:12.960749 | orchestrator | | 5b66fc321ad44a34ade8ecb549a41c96 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-29 01:21:12.960755 | orchestrator | | 61ba29fee2834cb6afb5faeb82812004 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-03-29 01:21:12.960761 | orchestrator | | 720e6522ef764ec2933153530c10c483 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-03-29 01:21:12.960767 | orchestrator | | 720fda0641a8455d9fcc3d6ed9795436 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-03-29 01:21:12.960774 | orchestrator | | 78cb4cc5e21045d9b428ca76c360b205 | RegionOne | magnum | container-infra | True | public | 
https://api.testbed.osism.xyz:9511/v1 | 2026-03-29 01:21:12.960781 | orchestrator | | 7ec1dfb00361482bbcd873a1f45a7304 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-03-29 01:21:12.960788 | orchestrator | | ac668ebdbd224e06b3114d0cf5477030 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-29 01:21:12.960795 | orchestrator | | bd2545c8d63d4d858c86bbfe69e26ea1 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-03-29 01:21:12.960800 | orchestrator | | bea4a903e15d422788426a116555eae2 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-03-29 01:21:12.960804 | orchestrator | | c4b0fdf5bf5c48b99718dc2501df8cb1 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-29 01:21:12.960807 | orchestrator | | f047339202064550a2067c25d83d549e | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-03-29 01:21:12.960811 | orchestrator | | f2cb031bf15042928e6c2a46683f903b | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-03-29 01:21:12.960829 | orchestrator | | f63afff9598b4081b15cfb00e97b2e39 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-03-29 01:21:12.960838 | orchestrator | | ff1deb878b684e43b70b969c1ad1447e | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-03-29 01:21:12.960843 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-29 01:21:13.214607 | orchestrator | 2026-03-29 01:21:13.214663 | orchestrator | # Cinder 2026-03-29 01:21:13.214673 | orchestrator | 
2026-03-29 01:21:13.214679 | orchestrator | + echo 2026-03-29 01:21:13.214684 | orchestrator | + echo '# Cinder' 2026-03-29 01:21:13.214690 | orchestrator | + echo 2026-03-29 01:21:13.214696 | orchestrator | + openstack volume service list 2026-03-29 01:21:15.695169 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-29 01:21:15.695225 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-03-29 01:21:15.695231 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-29 01:21:15.695246 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-29T01:21:11.000000 | 2026-03-29 01:21:15.695251 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-29T01:21:11.000000 | 2026-03-29 01:21:15.695255 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-29T01:21:10.000000 | 2026-03-29 01:21:15.695260 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-03-29T01:21:10.000000 | 2026-03-29 01:21:15.695264 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-03-29T01:21:13.000000 | 2026-03-29 01:21:15.695269 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-03-29T01:21:15.000000 | 2026-03-29 01:21:15.695274 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-03-29T01:21:06.000000 | 2026-03-29 01:21:15.695278 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-03-29T01:21:09.000000 | 2026-03-29 01:21:15.695283 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-03-29T01:21:09.000000 | 2026-03-29 01:21:15.695287 | orchestrator | 
+------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-29 01:21:15.945239 | orchestrator | 2026-03-29 01:21:15.945286 | orchestrator | # Neutron 2026-03-29 01:21:15.945291 | orchestrator | 2026-03-29 01:21:15.945295 | orchestrator | + echo 2026-03-29 01:21:15.945299 | orchestrator | + echo '# Neutron' 2026-03-29 01:21:15.945303 | orchestrator | + echo 2026-03-29 01:21:15.945307 | orchestrator | + openstack network agent list 2026-03-29 01:21:18.736338 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-29 01:21:18.736430 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-03-29 01:21:18.736439 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-29 01:21:18.736447 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-03-29 01:21:18.736455 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-03-29 01:21:18.736463 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-03-29 01:21:18.736469 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-03-29 01:21:18.736476 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-03-29 01:21:18.736483 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-03-29 01:21:18.736489 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | 
neutron-ovn-metadata-agent | 2026-03-29 01:21:18.736631 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-29 01:21:18.736640 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-29 01:21:18.736647 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-29 01:21:18.974791 | orchestrator | + openstack network service provider list 2026-03-29 01:21:21.597649 | orchestrator | +---------------+------+---------+ 2026-03-29 01:21:21.597747 | orchestrator | | Service Type | Name | Default | 2026-03-29 01:21:21.597759 | orchestrator | +---------------+------+---------+ 2026-03-29 01:21:21.597763 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-03-29 01:21:21.597768 | orchestrator | +---------------+------+---------+ 2026-03-29 01:21:21.912903 | orchestrator | 2026-03-29 01:21:21.912983 | orchestrator | # Nova 2026-03-29 01:21:21.912990 | orchestrator | 2026-03-29 01:21:21.912995 | orchestrator | + echo 2026-03-29 01:21:21.912999 | orchestrator | + echo '# Nova' 2026-03-29 01:21:21.913003 | orchestrator | + echo 2026-03-29 01:21:21.913007 | orchestrator | + openstack compute service list 2026-03-29 01:21:25.160201 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-29 01:21:25.160264 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-03-29 01:21:25.160276 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-29 01:21:25.160285 | orchestrator | | 6203b0b6-a621-46aa-b5fb-fc6ab5d4a0fc | nova-scheduler | testbed-node-0 | internal | 
enabled | up | 2026-03-29T01:21:17.000000 | 2026-03-29 01:21:25.160293 | orchestrator | | e88a7b7e-845d-4de6-acb0-0b6210f00ba5 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-29T01:21:18.000000 | 2026-03-29 01:21:25.160312 | orchestrator | | 7c26f880-8208-4881-93ed-febfa25be559 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-29T01:21:21.000000 | 2026-03-29 01:21:25.160321 | orchestrator | | c555fc42-d96d-4783-adb7-bd56dc43203b | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-03-29T01:21:24.000000 | 2026-03-29 01:21:25.160328 | orchestrator | | 8105474c-9b93-4ca7-b5a2-4c48a2a0811f | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-03-29T01:21:19.000000 | 2026-03-29 01:21:25.160336 | orchestrator | | 281d5fb0-79ce-4005-b0cf-1fe9812813d6 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-03-29T01:21:20.000000 | 2026-03-29 01:21:25.160345 | orchestrator | | 9569d765-45ff-4410-b7af-48d5b0843f38 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-03-29T01:21:17.000000 | 2026-03-29 01:21:25.160354 | orchestrator | | 340605d5-1cd5-4ed1-955b-723c4165ffa6 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-03-29T01:21:19.000000 | 2026-03-29 01:21:25.160362 | orchestrator | | a645f5d0-c284-46ec-b79c-5bbb6970eefe | nova-compute | testbed-node-5 | nova | enabled | up | 2026-03-29T01:21:19.000000 | 2026-03-29 01:21:25.160371 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-29 01:21:25.442228 | orchestrator | + openstack hypervisor list 2026-03-29 01:21:28.142545 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-29 01:21:28.142658 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-03-29 01:21:28.142677 | orchestrator | 
+--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-29 01:21:28.142683 | orchestrator | | 7d710155-ea61-486e-b275-e38fdb8788b8 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-03-29 01:21:28.142690 | orchestrator | | fb7724a0-7723-41a2-b9e2-092a96eafae5 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-03-29 01:21:28.142724 | orchestrator | | 1efb7665-96cb-4654-b14b-cea4e94de9ad | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-03-29 01:21:28.142731 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-29 01:21:28.399071 | orchestrator | 2026-03-29 01:21:28.399137 | orchestrator | # Run OpenStack test play 2026-03-29 01:21:28.399143 | orchestrator | 2026-03-29 01:21:28.399148 | orchestrator | + echo 2026-03-29 01:21:28.399153 | orchestrator | + echo '# Run OpenStack test play' 2026-03-29 01:21:28.399158 | orchestrator | + echo 2026-03-29 01:21:28.399163 | orchestrator | + osism apply --environment openstack test 2026-03-29 01:21:29.675890 | orchestrator | 2026-03-29 01:21:29 | INFO  | Trying to run play test in environment openstack 2026-03-29 01:21:29.704025 | orchestrator | 2026-03-29 01:21:29 | INFO  | Prepare task for execution of test. 2026-03-29 01:21:29.770409 | orchestrator | 2026-03-29 01:21:29 | INFO  | Task 5f4967c1-b9cb-4b9b-8a41-a6bb99d9835a (test) was prepared for execution. 2026-03-29 01:21:29.770481 | orchestrator | 2026-03-29 01:21:29 | INFO  | It takes a moment until task 5f4967c1-b9cb-4b9b-8a41-a6bb99d9835a (test) has been started and output is visible here. 
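The Cinder, Neutron, and Nova tables above are eyeballed here, but the same check can be done mechanically: `openstack ... list -f json` emits the table rows as objects with `Status`/`State` keys matching the columns shown. A minimal sketch under that assumption; `all_up` is a hypothetical helper, not part of the check scripts in this job:

```python
# Sketch: verify that every listed OpenStack service row is
# "enabled" and "up", mirroring the Status/State columns above.
# all_up() is a hypothetical helper, not an OSISM or OpenStack API.

def all_up(services: list) -> bool:
    # A single disabled or down service fails the whole check.
    return all(
        s.get("Status") == "enabled" and s.get("State") == "up"
        for s in services
    )
```

Fed the nine `openstack volume service list` rows or the nine `compute service list` rows above, this would return True, since every row shows `enabled`/`up`.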
2026-03-29 01:24:04.377311 | orchestrator |
2026-03-29 01:24:04.377430 | orchestrator | PLAY [Create test project] *****************************************************
2026-03-29 01:24:04.377443 | orchestrator |
2026-03-29 01:24:04.377450 | orchestrator | TASK [Create test domain] ******************************************************
2026-03-29 01:24:04.377458 | orchestrator | Sunday 29 March 2026 01:21:33 +0000 (0:00:00.107) 0:00:00.107 **********
2026-03-29 01:24:04.377464 | orchestrator | changed: [localhost]
2026-03-29 01:24:04.377472 | orchestrator |
2026-03-29 01:24:04.377478 | orchestrator | TASK [Create test-admin user] **************************************************
2026-03-29 01:24:04.377484 | orchestrator | Sunday 29 March 2026 01:21:36 +0000 (0:00:03.806) 0:00:03.914 **********
2026-03-29 01:24:04.377491 | orchestrator | changed: [localhost]
2026-03-29 01:24:04.377497 | orchestrator |
2026-03-29 01:24:04.377504 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-03-29 01:24:04.377511 | orchestrator | Sunday 29 March 2026 01:21:41 +0000 (0:00:04.189) 0:00:08.103 **********
2026-03-29 01:24:04.377517 | orchestrator | changed: [localhost]
2026-03-29 01:24:04.377524 | orchestrator |
2026-03-29 01:24:04.377530 | orchestrator | TASK [Create test project] *****************************************************
2026-03-29 01:24:04.377537 | orchestrator | Sunday 29 March 2026 01:21:47 +0000 (0:00:06.742) 0:00:14.846 **********
2026-03-29 01:24:04.377543 | orchestrator | changed: [localhost]
2026-03-29 01:24:04.377550 | orchestrator |
2026-03-29 01:24:04.377556 | orchestrator | TASK [Create test user] ********************************************************
2026-03-29 01:24:04.377560 | orchestrator | Sunday 29 March 2026 01:21:51 +0000 (0:00:03.876) 0:00:18.722 **********
2026-03-29 01:24:04.377564 | orchestrator | changed: [localhost]
2026-03-29 01:24:04.377568 | orchestrator |
2026-03-29 01:24:04.377572 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-03-29 01:24:04.377577 | orchestrator | Sunday 29 March 2026 01:21:56 +0000 (0:00:04.411) 0:00:23.134 **********
2026-03-29 01:24:04.377581 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-03-29 01:24:04.377587 | orchestrator | changed: [localhost] => (item=member)
2026-03-29 01:24:04.377592 | orchestrator | changed: [localhost] => (item=creator)
2026-03-29 01:24:04.377596 | orchestrator |
2026-03-29 01:24:04.377600 | orchestrator | TASK [Create test server group] ************************************************
2026-03-29 01:24:04.377605 | orchestrator | Sunday 29 March 2026 01:22:07 +0000 (0:00:11.485) 0:00:34.620 **********
2026-03-29 01:24:04.377609 | orchestrator | changed: [localhost]
2026-03-29 01:24:04.377612 | orchestrator |
2026-03-29 01:24:04.377616 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-03-29 01:24:04.377621 | orchestrator | Sunday 29 March 2026 01:22:11 +0000 (0:00:04.396) 0:00:39.016 **********
2026-03-29 01:24:04.377643 | orchestrator | changed: [localhost]
2026-03-29 01:24:04.377647 | orchestrator |
2026-03-29 01:24:04.377651 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-03-29 01:24:04.377655 | orchestrator | Sunday 29 March 2026 01:22:16 +0000 (0:00:04.605) 0:00:43.621 **********
2026-03-29 01:24:04.377659 | orchestrator | changed: [localhost]
2026-03-29 01:24:04.377663 | orchestrator |
2026-03-29 01:24:04.377667 | orchestrator | TASK [Create icmp security group] **********************************************
2026-03-29 01:24:04.377671 | orchestrator | Sunday 29 March 2026 01:22:20 +0000 (0:00:04.469) 0:00:48.090 **********
2026-03-29 01:24:04.377675 | orchestrator | changed: [localhost]
2026-03-29 01:24:04.377678 | orchestrator |
2026-03-29 01:24:04.377682 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-03-29 01:24:04.377686 | orchestrator | Sunday 29 March 2026 01:22:24 +0000 (0:00:03.791) 0:00:51.882 **********
2026-03-29 01:24:04.377690 | orchestrator | changed: [localhost]
2026-03-29 01:24:04.377694 | orchestrator |
2026-03-29 01:24:04.377697 | orchestrator | TASK [Create test keypair] *****************************************************
2026-03-29 01:24:04.377701 | orchestrator | Sunday 29 March 2026 01:22:28 +0000 (0:00:04.119) 0:00:56.001 **********
2026-03-29 01:24:04.377705 | orchestrator | changed: [localhost]
2026-03-29 01:24:04.377709 | orchestrator |
2026-03-29 01:24:04.377712 | orchestrator | TASK [Create test network] *****************************************************
2026-03-29 01:24:04.377716 | orchestrator | Sunday 29 March 2026 01:22:32 +0000 (0:00:04.006) 0:01:00.008 **********
2026-03-29 01:24:04.377720 | orchestrator | changed: [localhost]
2026-03-29 01:24:04.377724 | orchestrator |
2026-03-29 01:24:04.377728 | orchestrator | TASK [Create test subnet] ******************************************************
2026-03-29 01:24:04.377731 | orchestrator | Sunday 29 March 2026 01:22:37 +0000 (0:00:04.656) 0:01:04.664 **********
2026-03-29 01:24:04.377735 | orchestrator | changed: [localhost]
2026-03-29 01:24:04.377739 | orchestrator |
2026-03-29 01:24:04.377742 | orchestrator | TASK [Create test router] ******************************************************
2026-03-29 01:24:04.377747 | orchestrator | Sunday 29 March 2026 01:22:43 +0000 (0:00:05.715) 0:01:10.379 **********
2026-03-29 01:24:04.377751 | orchestrator | changed: [localhost]
2026-03-29 01:24:04.377754 | orchestrator |
2026-03-29 01:24:04.377758 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-03-29 01:24:04.377762 | orchestrator |
2026-03-29 01:24:04.377766 | orchestrator | TASK [Get test server group] ***************************************************
2026-03-29 01:24:04.377769 | orchestrator | Sunday 29 March 2026 01:22:54 +0000 (0:00:10.800) 0:01:21.179 **********
2026-03-29 01:24:04.377773 | orchestrator | ok: [localhost]
2026-03-29 01:24:04.377777 | orchestrator |
2026-03-29 01:24:04.377781 | orchestrator | TASK [Detach test volume] ******************************************************
2026-03-29 01:24:04.377785 | orchestrator | Sunday 29 March 2026 01:22:58 +0000 (0:00:04.210) 0:01:25.390 **********
2026-03-29 01:24:04.377789 | orchestrator | skipping: [localhost]
2026-03-29 01:24:04.377793 | orchestrator |
2026-03-29 01:24:04.377797 | orchestrator | TASK [Delete test volume] ******************************************************
2026-03-29 01:24:04.377801 | orchestrator | Sunday 29 March 2026 01:22:58 +0000 (0:00:00.055) 0:01:25.445 **********
2026-03-29 01:24:04.377804 | orchestrator | skipping: [localhost]
2026-03-29 01:24:04.377808 | orchestrator |
2026-03-29 01:24:04.377812 | orchestrator | TASK [Delete test instances] ***************************************************
2026-03-29 01:24:04.377816 | orchestrator | Sunday 29 March 2026 01:22:58 +0000 (0:00:00.097) 0:01:25.542 **********
2026-03-29 01:24:04.377820 | orchestrator | skipping: [localhost] => (item=test-4)
2026-03-29 01:24:04.377824 | orchestrator | skipping: [localhost] => (item=test-3)
2026-03-29 01:24:04.377840 | orchestrator | skipping: [localhost] => (item=test-2)
2026-03-29 01:24:04.377844 | orchestrator | skipping: [localhost] => (item=test-1)
2026-03-29 01:24:04.377848 | orchestrator | skipping: [localhost] => (item=test)
2026-03-29 01:24:04.377852 | orchestrator | skipping: [localhost]
2026-03-29 01:24:04.377896 | orchestrator |
2026-03-29 01:24:04.377915 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-03-29 01:24:04.377924 | orchestrator | Sunday 29 March 2026 01:22:58 +0000 (0:00:00.158) 0:01:25.700 **********
2026-03-29 01:24:04.377929 | orchestrator | skipping: [localhost]
2026-03-29 01:24:04.377934 | orchestrator |
2026-03-29 01:24:04.377938 | orchestrator | TASK [Create test instances] ***************************************************
2026-03-29 01:24:04.377943 | orchestrator | Sunday 29 March 2026 01:22:58 +0000 (0:00:00.143) 0:01:25.843 **********
2026-03-29 01:24:04.377947 | orchestrator | changed: [localhost] => (item=test)
2026-03-29 01:24:04.377952 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-29 01:24:04.377956 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-29 01:24:04.377961 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-29 01:24:04.377965 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-29 01:24:04.377969 | orchestrator |
2026-03-29 01:24:04.377974 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-03-29 01:24:04.377978 | orchestrator | Sunday 29 March 2026 01:23:03 +0000 (0:00:05.137) 0:01:30.981 **********
2026-03-29 01:24:04.377983 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-03-29 01:24:04.377988 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-03-29 01:24:04.377993 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-03-29 01:24:04.377997 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
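The "Create test instances" / "Wait for instance creation to complete" pair above is Ansible's fire-and-forget async pattern: each loop item registers an `ansible_job_id` and `results_file` (visible in the result dicts that follow the retry messages), and a second task polls those jobs until they finish. The testbed playbook itself is not part of this log, so the following is only a minimal sketch under assumed module and parameter choices (`openstack.cloud.server`, flavor `SCS-1L-1` and network `test` taken from later output in this log):

```yaml
# Sketch only -- module names and values are assumptions, not the job's source.
- name: Create test instances
  openstack.cloud.server:
    cloud: test
    name: "{{ item }}"
    flavor: SCS-1L-1            # flavor seen in the server list below
    network: test
    key_name: test
    security_groups: [ssh, icmp]
    boot_from_volume: true      # matches "N/A (booted from volume)"
  loop: [test, test-1, test-2, test-3, test-4]
  async: 600                    # fire-and-forget: task returns immediately
  poll: 0
  register: created

- name: Wait for instance creation to complete
  ansible.builtin.async_status:
    jid: "{{ item.ansible_job_id }}"
  loop: "{{ created.results }}"
  register: job
  until: job.finished           # each unfinished poll logs "FAILED - RETRYING"
  retries: 60                   # matches "(60 retries left)" above
  delay: 10
</imports>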
2026-03-29 01:24:04.378004 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j692031563218.2734', 'results_file': '/ansible/.ansible_async/j692031563218.2734', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-29 01:24:04.378048 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j453481867906.2759', 'results_file': '/ansible/.ansible_async/j453481867906.2759', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-29 01:24:04.378053 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j211432941698.2784', 'results_file': '/ansible/.ansible_async/j211432941698.2784', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-29 01:24:04.378058 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j620798448714.2809', 'results_file': '/ansible/.ansible_async/j620798448714.2809', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-29 01:24:04.378062 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j687251128950.2834', 'results_file': '/ansible/.ansible_async/j687251128950.2834', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-29 01:24:04.378067 | orchestrator |
2026-03-29 01:24:04.378072 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-03-29 01:24:04.378076 | orchestrator | Sunday 29 March 2026 01:23:50 +0000 (0:00:47.106) 0:02:18.087 **********
2026-03-29 01:24:04.378081 | orchestrator | changed: [localhost] => (item=test)
2026-03-29 01:24:04.378085 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-29 01:24:04.378089 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-29 01:24:04.378094 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-29 01:24:04.378098 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-29 01:24:04.378103 | orchestrator |
2026-03-29 01:24:04.378107 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-03-29 01:24:04.378111 | orchestrator | Sunday 29 March 2026 01:23:55 +0000 (0:00:04.482) 0:02:22.570 **********
2026-03-29 01:24:04.378116 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-03-29 01:24:04.378122 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j367250577999.2938', 'results_file': '/ansible/.ansible_async/j367250577999.2938', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-29 01:24:04.378130 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j537612978577.2963', 'results_file': '/ansible/.ansible_async/j537612978577.2963', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-29 01:24:04.378135 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j83259844762.2988', 'results_file': '/ansible/.ansible_async/j83259844762.2988', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-29 01:24:04.378144 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j968805216344.3013', 'results_file': '/ansible/.ansible_async/j968805216344.3013', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-29 01:24:45.892245 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j5996584471.3038', 'results_file': '/ansible/.ansible_async/j5996584471.3038', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-29 01:24:45.892345 | orchestrator |
2026-03-29 01:24:45.892357 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-03-29 01:24:45.892365 | orchestrator | Sunday 29 March 2026 01:24:05 +0000 (0:00:09.677) 0:02:32.247 **********
2026-03-29 01:24:45.892372 | orchestrator | changed: [localhost] => (item=test)
2026-03-29 01:24:45.892380 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-29 01:24:45.892387 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-29 01:24:45.892393 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-29 01:24:45.892400 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-29 01:24:45.892407 | orchestrator |
2026-03-29 01:24:45.892414 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-03-29 01:24:45.892422 | orchestrator | Sunday 29 March 2026 01:24:09 +0000 (0:00:04.360) 0:02:36.608 **********
2026-03-29 01:24:45.892429 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
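The metadata and tag steps follow the same launch-then-poll shape; the "(30 retries left)" message comes from the `until`/`retries` loop on the status check. A hedged sketch of the tag step, using the `openstack` CLI's standard `server set --tag` option as a stand-in (the actual playbook module is not shown in this log; `--os-cloud test` matches the CLI calls later in the job):

```yaml
# Sketch only -- the job's real task implementation is not part of this log.
- name: Add tag to instances
  ansible.builtin.command: >-
    openstack --os-cloud test server set --tag test {{ item }}
  loop: [test, test-1, test-2, test-3, test-4]
  async: 300                    # launched asynchronously, like the other steps
  poll: 0
  register: tag_jobs

- name: Wait for tags to be added
  ansible.builtin.async_status:
    jid: "{{ item.ansible_job_id }}"
  loop: "{{ tag_jobs.results }}"
  register: tag_job
  until: tag_job.finished
  retries: 30                   # matches "(30 retries left)" above
  delay: 5
```
The `server show` output later in the log confirms the result of this step: each instance reports `tags | test` and `properties | hostname='<name>'`.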
2026-03-29 01:24:45.892437 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j412928516354.3114', 'results_file': '/ansible/.ansible_async/j412928516354.3114', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-29 01:24:45.892445 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j737547181508.3139', 'results_file': '/ansible/.ansible_async/j737547181508.3139', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-29 01:24:45.892466 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j525104963849.3165', 'results_file': '/ansible/.ansible_async/j525104963849.3165', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-29 01:24:45.892472 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j77477250147.3191', 'results_file': '/ansible/.ansible_async/j77477250147.3191', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-29 01:24:45.892479 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j84149693781.3217', 'results_file': '/ansible/.ansible_async/j84149693781.3217', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-29 01:24:45.892486 | orchestrator |
2026-03-29 01:24:45.892492 | orchestrator | TASK [Create test volume] ******************************************************
2026-03-29 01:24:45.892499 | orchestrator | Sunday 29 March 2026 01:24:19 +0000 (0:00:10.103) 0:02:46.711 **********
2026-03-29 01:24:45.892505 | orchestrator | changed: [localhost]
2026-03-29 01:24:45.892511 | orchestrator |
2026-03-29 01:24:45.892519 | orchestrator | TASK [Attach test volume] ******************************************************
2026-03-29 01:24:45.892545 | orchestrator | Sunday 29 March 2026 01:24:26 +0000 (0:00:06.952) 0:02:53.663 **********
2026-03-29 01:24:45.892551 | orchestrator | changed: [localhost]
2026-03-29 01:24:45.892557 | orchestrator |
2026-03-29 01:24:45.892563 | orchestrator | TASK [Create floating ip address] **********************************************
2026-03-29 01:24:45.892569 | orchestrator | Sunday 29 March 2026 01:24:40 +0000 (0:00:13.773) 0:03:07.437 **********
2026-03-29 01:24:45.892575 | orchestrator | ok: [localhost]
2026-03-29 01:24:45.892581 | orchestrator |
2026-03-29 01:24:45.892588 | orchestrator | TASK [Print floating ip address] ***********************************************
2026-03-29 01:24:45.892594 | orchestrator | Sunday 29 March 2026 01:24:45 +0000 (0:00:05.308) 0:03:12.746 **********
2026-03-29 01:24:45.892601 | orchestrator | ok: [localhost] => {
2026-03-29 01:24:45.892606 | orchestrator |     "msg": "192.168.112.114"
2026-03-29 01:24:45.892613 | orchestrator | }
2026-03-29 01:24:45.892620 | orchestrator |
2026-03-29 01:24:45.892626 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 01:24:45.892633 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-29 01:24:45.892640 | orchestrator |
2026-03-29 01:24:45.892646 | orchestrator |
2026-03-29 01:24:45.892651 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 01:24:45.892657 | orchestrator | Sunday 29 March 2026 01:24:45 +0000 (0:00:00.043) 0:03:12.789 **********
2026-03-29 01:24:45.892662 | orchestrator | ===============================================================================
2026-03-29 01:24:45.892668 | orchestrator | Wait for instance creation to complete --------------------------------- 47.11s
2026-03-29 01:24:45.892673 | orchestrator | Attach test volume ----------------------------------------------------- 13.77s
2026-03-29 01:24:45.892679 | orchestrator | Add member roles to user test ------------------------------------------ 11.49s
2026-03-29 01:24:45.892685 | orchestrator | Create test router ----------------------------------------------------- 10.80s
2026-03-29 01:24:45.892691 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.10s
2026-03-29 01:24:45.892697 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.68s
2026-03-29 01:24:45.892704 | orchestrator | Create test volume ------------------------------------------------------ 6.95s
2026-03-29 01:24:45.892726 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.74s
2026-03-29 01:24:45.892732 | orchestrator | Create test subnet ------------------------------------------------------ 5.72s
2026-03-29 01:24:45.892739 | orchestrator | Create floating ip address ---------------------------------------------- 5.31s
2026-03-29 01:24:45.892746 | orchestrator | Create test instances --------------------------------------------------- 5.14s
2026-03-29 01:24:45.892753 | orchestrator | Create test network ----------------------------------------------------- 4.66s
2026-03-29 01:24:45.892760 | orchestrator | Create ssh security group ----------------------------------------------- 4.61s
2026-03-29 01:24:45.892767 | orchestrator | Add metadata to instances ----------------------------------------------- 4.48s
2026-03-29 01:24:45.892773 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.47s
2026-03-29 01:24:45.892779 | orchestrator | Create test user -------------------------------------------------------- 4.41s
2026-03-29 01:24:45.892785 | orchestrator | Create test server group ------------------------------------------------ 4.40s
2026-03-29 01:24:45.892791 | orchestrator | Add tag to instances ---------------------------------------------------- 4.36s
2026-03-29 01:24:45.892796 | orchestrator | Get test server group --------------------------------------------------- 4.21s
2026-03-29 01:24:45.892803 | orchestrator | Create test-admin user -------------------------------------------------- 4.19s
2026-03-29 01:24:46.078125 | orchestrator | + server_list
2026-03-29 01:24:46.078210 | orchestrator | + openstack --os-cloud test server list
2026-03-29 01:24:49.490058 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-29 01:24:49.490128 | orchestrator | | ID                                   | Name   | Status | Networks                              | Image                    | Flavor   |
2026-03-29 01:24:49.490134 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-29 01:24:49.490145 | orchestrator | | 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 | test-4 | ACTIVE | test=192.168.112.158, 192.168.200.165 | N/A (booted from volume) | SCS-1L-1 |
2026-03-29 01:24:49.490148 | orchestrator | | 9879f342-fa4f-461a-a0d6-7f864d96b625 | test-3 | ACTIVE | test=192.168.112.122, 192.168.200.170 | N/A (booted from volume) | SCS-1L-1 |
2026-03-29 01:24:49.490152 | orchestrator | | 2a114126-22c1-45ae-bff6-0a3f9f5bab92 | test-2 | ACTIVE | test=192.168.112.116, 192.168.200.188 | N/A (booted from volume) | SCS-1L-1 |
2026-03-29 01:24:49.490155 | orchestrator | | 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 | test-1 | ACTIVE | test=192.168.112.123, 192.168.200.93  | N/A (booted from volume) | SCS-1L-1 |
2026-03-29 01:24:49.490158 | orchestrator | | 4abd36fc-851b-4802-9b25-835953847ff8 | test   | ACTIVE | test=192.168.112.114, 192.168.200.149 | N/A (booted from volume) | SCS-1L-1 |
2026-03-29 01:24:49.490161 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-29 01:24:49.745583 | orchestrator | + openstack --os-cloud test server show test
2026-03-29 01:24:52.701812 | orchestrator | +-------------------------------------+---------------------------------------+
2026-03-29 01:24:52.701904 | orchestrator | | Field | Value |
2026-03-29 01:24:52.701920 | orchestrator | +-------------------------------------+---------------------------------------+
2026-03-29 01:24:52.701932 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-29 01:24:52.701971 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-29 01:24:52.701983 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-29 01:24:52.702012 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-03-29 01:24:52.702094 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-29 01:24:52.702112 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-29 01:24:52.702140 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-29 01:24:52.702152 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-29 01:24:52.702164 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-29 01:24:52.702175 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-29 01:24:52.702186 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-29 01:24:52.702198 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-29 01:24:52.702209 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-29 01:24:52.702228 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-29 01:24:52.702239 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-29 01:24:52.702250 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-29T01:23:35.000000 |
2026-03-29 01:24:52.702268 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-29 01:24:52.702280 | orchestrator | | accessIPv4 | |
2026-03-29 01:24:52.702291 | orchestrator | | accessIPv6 | |
2026-03-29 01:24:52.702302 | orchestrator | | addresses | test=192.168.112.114, 192.168.200.149 |
2026-03-29 01:24:52.702313 | orchestrator | | config_drive | |
2026-03-29 01:24:52.702337 | orchestrator | | created | 2026-03-29T01:23:08Z |
2026-03-29 01:24:52.702367 | orchestrator | | description | None |
2026-03-29 01:24:52.702387 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-29 01:24:52.702414 | orchestrator | | hostId | 13e7fe1081ebbb273b64055fe4b000b45c9d871cd9993a6fabf3c6a6 |
2026-03-29 01:24:52.702435 | orchestrator | | host_status | None |
2026-03-29 01:24:52.702465 | orchestrator | | id | 4abd36fc-851b-4802-9b25-835953847ff8 |
2026-03-29 01:24:52.702484 | orchestrator | | image | N/A (booted from volume) |
2026-03-29 01:24:52.702503 | orchestrator | | key_name | test |
2026-03-29 01:24:52.702522 | orchestrator | | locked | False |
2026-03-29 01:24:52.702543 | orchestrator | | locked_reason | None |
2026-03-29 01:24:52.702572 | orchestrator | | name | test |
2026-03-29 01:24:52.702592 | orchestrator | | pinned_availability_zone | None |
2026-03-29 01:24:52.702610 | orchestrator | | progress | 0 |
2026-03-29 01:24:52.702637 | orchestrator | | project_id | 3d0e4fe97c334e3f9050b524d0851e66 |
2026-03-29 01:24:52.702658 | orchestrator | | properties | hostname='test' |
2026-03-29 01:24:52.702686 | orchestrator | | security_groups | name='icmp' |
2026-03-29 01:24:52.702705 | orchestrator | | | name='ssh' |
2026-03-29 01:24:52.702724 | orchestrator | | server_groups | None |
2026-03-29 01:24:52.702742 | orchestrator | | status | ACTIVE |
2026-03-29 01:24:52.702773 | orchestrator | | tags | test |
2026-03-29 01:24:52.702792 | orchestrator | | trusted_image_certificates | None |
2026-03-29 01:24:52.702812 | orchestrator | | updated | 2026-03-29T01:23:56Z |
2026-03-29 01:24:52.702832 | orchestrator | | user_id | 04ca45812db54b7b985775c855a79038 |
2026-03-29 01:24:52.702866 | orchestrator | | volumes_attached | delete_on_termination='True', id='cca63deb-2e65-4d24-8133-2a84b17ca0e0' |
2026-03-29 01:24:52.702888 | orchestrator | | | delete_on_termination='False', id='58b67bc5-1569-42ab-8e66-2e050c97a519' |
2026-03-29 01:24:52.704345 | orchestrator | +-------------------------------------+---------------------------------------+
2026-03-29 01:24:52.946761 | orchestrator | + openstack --os-cloud test server show test-1
2026-03-29 01:24:55.982219 | orchestrator | +-------------------------------------+---------------------------------------+
2026-03-29 01:24:55.982337 | orchestrator | | Field | Value |
2026-03-29 01:24:55.982372 | orchestrator | +-------------------------------------+---------------------------------------+
2026-03-29 01:24:55.982377 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-29 01:24:55.982382 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-29 01:24:55.982386 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-29 01:24:55.982390 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-03-29 01:24:55.982405 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-29 01:24:55.983163 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-29 01:24:55.983227 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-29 01:24:55.983235 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-29 01:24:55.983239 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-29 01:24:55.983256 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-29 01:24:55.983261 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-29 01:24:55.983265 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-29 01:24:55.983270 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-29 01:24:55.983274 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-29 01:24:55.983285 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-29 01:24:55.983290 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-29T01:23:35.000000 |
2026-03-29 01:24:55.983308 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-29 01:24:55.983312 | orchestrator | | accessIPv4 | |
2026-03-29 01:24:55.983321 | orchestrator | | accessIPv6 | |
2026-03-29 01:24:55.983328 | orchestrator | | addresses | test=192.168.112.123, 192.168.200.93 |
2026-03-29 01:24:55.983341 | orchestrator | | config_drive | |
2026-03-29 01:24:55.983351 | orchestrator | | created | 2026-03-29T01:23:09Z |
2026-03-29 01:24:55.983359 | orchestrator | | description | None |
2026-03-29 01:24:55.983364 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-29 01:24:55.983370 | orchestrator | | hostId | 13e7fe1081ebbb273b64055fe4b000b45c9d871cd9993a6fabf3c6a6 |
2026-03-29 01:24:55.983377 | orchestrator | | host_status | None |
2026-03-29 01:24:55.983389 | orchestrator | | id | 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 |
2026-03-29 01:24:55.983408 | orchestrator | | image | N/A (booted from volume) |
2026-03-29 01:24:55.983414 | orchestrator | | key_name | test |
2026-03-29 01:24:55.983421 | orchestrator | | locked | False |
2026-03-29 01:24:55.983427 | orchestrator | | locked_reason | None |
2026-03-29 01:24:55.983433 | orchestrator | | name | test-1 |
2026-03-29 01:24:55.983439 | orchestrator | | pinned_availability_zone | None |
2026-03-29 01:24:55.983445 | orchestrator | | progress | 0 |
2026-03-29 01:24:55.983455 | orchestrator | | project_id | 3d0e4fe97c334e3f9050b524d0851e66 |
2026-03-29 01:24:55.983461 | orchestrator | | properties | hostname='test-1' |
2026-03-29 01:24:55.983478 | orchestrator | | security_groups | name='icmp' |
2026-03-29 01:24:55.983485 | orchestrator | | | name='ssh' |
2026-03-29 01:24:55.983491 | orchestrator | | server_groups | None |
2026-03-29 01:24:55.983497 | orchestrator | | status | ACTIVE |
2026-03-29 01:24:55.983503 | orchestrator | | tags | test |
2026-03-29 01:24:55.983509 | orchestrator | | trusted_image_certificates | None |
2026-03-29 01:24:55.983515 | orchestrator | | updated | 2026-03-29T01:23:57Z |
2026-03-29 01:24:55.983521 | orchestrator | | user_id | 04ca45812db54b7b985775c855a79038 |
2026-03-29 01:24:55.983530 | orchestrator | | volumes_attached | delete_on_termination='True', id='63dfd307-5862-451d-bd0e-9db504be2642' |
2026-03-29 01:24:55.985896 | orchestrator | +-------------------------------------+---------------------------------------+
2026-03-29 01:24:56.252553 | orchestrator | + openstack --os-cloud test server show test-2
2026-03-29 01:24:59.263072 | orchestrator | +-------------------------------------+---------------------------------------+
2026-03-29 01:24:59.263153 | orchestrator | | Field | Value |
2026-03-29 01:24:59.263166 | orchestrator | +-------------------------------------+---------------------------------------+
2026-03-29 01:24:59.263171 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-29 01:24:59.263176 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-29 01:24:59.263180 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-29 01:24:59.263184 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-03-29 01:24:59.263193 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-29 01:24:59.263197 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-29 01:24:59.263226 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-29 01:24:59.263233 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-29 01:24:59.263244 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-29 01:24:59.263252 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-29 01:24:59.263258 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-29 01:24:59.263264 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-29 01:24:59.263270 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-29 01:24:59.263275 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-29 01:24:59.263285 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-29 01:24:59.263296 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-29T01:23:35.000000 |
2026-03-29 01:24:59.263307 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-29 01:24:59.263313 | orchestrator | | accessIPv4 | |
2026-03-29 01:24:59.263319 | orchestrator | | accessIPv6 | |
2026-03-29 01:24:59.263325 | orchestrator | | addresses | test=192.168.112.116, 192.168.200.188 |
2026-03-29 01:24:59.263331 | orchestrator | | config_drive | |
2026-03-29 01:24:59.263337 | orchestrator | | created | 2026-03-29T01:23:09Z |
2026-03-29 01:24:59.263343 | orchestrator | | description | None |
2026-03-29 01:24:59.263349 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-29 01:24:59.263363 | orchestrator | | hostId | f665b3aa67dcc241d1488c0443f74e1a11ace7c61fe43d1d78d0b56b |
2026-03-29 01:24:59.263367 | orchestrator | | host_status | None |
2026-03-29 01:24:59.263375 | orchestrator | | id | 2a114126-22c1-45ae-bff6-0a3f9f5bab92 |
2026-03-29 01:24:59.263379 | orchestrator | | image | N/A (booted from volume) |
2026-03-29 01:24:59.263383 | orchestrator | | key_name | test |
2026-03-29 01:24:59.263387 | orchestrator | | locked | False |
2026-03-29 01:24:59.263391 | orchestrator | | locked_reason | None |
2026-03-29 01:24:59.263395 | orchestrator | | name | test-2 |
2026-03-29 01:24:59.263398 | orchestrator | | pinned_availability_zone | None |
2026-03-29 01:24:59.263402 | orchestrator | | progress | 0 |
2026-03-29 01:24:59.263409 | orchestrator | | project_id | 3d0e4fe97c334e3f9050b524d0851e66 |
2026-03-29 01:24:59.263414 | orchestrator | | properties | hostname='test-2' |
2026-03-29 01:24:59.263421 | orchestrator | | security_groups | name='icmp' |
2026-03-29 01:24:59.263426 | orchestrator | | | name='ssh' |
2026-03-29 01:24:59.263439 | orchestrator | | server_groups | None |
2026-03-29 01:24:59.263445 | orchestrator | | status | ACTIVE |
2026-03-29 01:24:59.263455 | orchestrator | | tags | test |
2026-03-29 01:24:59.263466 | orchestrator | | trusted_image_certificates | None |
2026-03-29 01:24:59.263473 | orchestrator | | updated | 2026-03-29T01:23:57Z |
2026-03-29 01:24:59.263485 | orchestrator | | user_id | 04ca45812db54b7b985775c855a79038 |
2026-03-29 01:24:59.263494 | orchestrator | | volumes_attached | delete_on_termination='True', id='9280088e-3f4b-4e12-880e-68ab91dbd00c' |
2026-03-29 01:24:59.268033 | orchestrator | +-------------------------------------+---------------------------------------+
2026-03-29 01:24:59.568167 | orchestrator | + openstack --os-cloud test server show test-3
2026-03-29 01:25:02.253882 | orchestrator | +-------------------------------------+---------------------------------------+
2026-03-29 01:25:02.253945 | orchestrator | | Field | Value |
2026-03-29 01:25:02.253995 | orchestrator | +-------------------------------------+---------------------------------------+
2026-03-29 01:25:02.254003 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-29 01:25:02.254010 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-29 01:25:02.254047 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-29 01:25:02.254072 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-03-29 01:25:02.254080 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-29 01:25:02.254099 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-29
01:25:02.254118 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-29 01:25:02.254124 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-29 01:25:02.254131 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-29 01:25:02.254144 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-29 01:25:02.254151 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-29 01:25:02.254157 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-29 01:25:02.254169 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-29 01:25:02.254175 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-29 01:25:02.254182 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-29 01:25:02.254192 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-29T01:23:33.000000 | 2026-03-29 01:25:02.254209 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-29 01:25:02.254215 | orchestrator | | accessIPv4 | | 2026-03-29 01:25:02.254219 | orchestrator | | accessIPv6 | | 2026-03-29 01:25:02.254223 | orchestrator | | addresses | test=192.168.112.122, 192.168.200.170 | 2026-03-29 01:25:02.254227 | orchestrator | | config_drive | | 2026-03-29 01:25:02.254234 | orchestrator | | created | 2026-03-29T01:23:10Z | 2026-03-29 01:25:02.254238 | orchestrator | | description | None | 2026-03-29 01:25:02.254242 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-29 01:25:02.254248 | orchestrator | | hostId | f665b3aa67dcc241d1488c0443f74e1a11ace7c61fe43d1d78d0b56b | 2026-03-29 01:25:02.254252 | orchestrator | | host_status | None | 2026-03-29 01:25:02.254260 | orchestrator | | id | 
9879f342-fa4f-461a-a0d6-7f864d96b625 | 2026-03-29 01:25:02.254264 | orchestrator | | image | N/A (booted from volume) | 2026-03-29 01:25:02.254268 | orchestrator | | key_name | test | 2026-03-29 01:25:02.254272 | orchestrator | | locked | False | 2026-03-29 01:25:02.254275 | orchestrator | | locked_reason | None | 2026-03-29 01:25:02.254282 | orchestrator | | name | test-3 | 2026-03-29 01:25:02.254286 | orchestrator | | pinned_availability_zone | None | 2026-03-29 01:25:02.254290 | orchestrator | | progress | 0 | 2026-03-29 01:25:02.254296 | orchestrator | | project_id | 3d0e4fe97c334e3f9050b524d0851e66 | 2026-03-29 01:25:02.254300 | orchestrator | | properties | hostname='test-3' | 2026-03-29 01:25:02.254306 | orchestrator | | security_groups | name='icmp' | 2026-03-29 01:25:02.254311 | orchestrator | | | name='ssh' | 2026-03-29 01:25:02.254315 | orchestrator | | server_groups | None | 2026-03-29 01:25:02.254319 | orchestrator | | status | ACTIVE | 2026-03-29 01:25:02.254325 | orchestrator | | tags | test | 2026-03-29 01:25:02.254329 | orchestrator | | trusted_image_certificates | None | 2026-03-29 01:25:02.254333 | orchestrator | | updated | 2026-03-29T01:23:58Z | 2026-03-29 01:25:02.254337 | orchestrator | | user_id | 04ca45812db54b7b985775c855a79038 | 2026-03-29 01:25:02.254343 | orchestrator | | volumes_attached | delete_on_termination='True', id='20c0df77-3372-47c6-99c1-d37505b8ee2c' | 2026-03-29 01:25:02.257820 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-29 01:25:02.530812 | orchestrator | + openstack --os-cloud test server show test-4 2026-03-29 01:25:05.693358 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-29 01:25:05.693432 | orchestrator | | Field | Value | 2026-03-29 01:25:05.693438 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-29 01:25:05.693459 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-29 01:25:05.693464 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-29 01:25:05.693468 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-29 01:25:05.693472 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-03-29 01:25:05.693476 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-29 01:25:05.693481 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-29 01:25:05.693495 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-29 01:25:05.693499 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-29 01:25:05.693503 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-29 01:25:05.693512 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-29 01:25:05.693516 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-29 01:25:05.693520 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-29 01:25:05.693524 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-03-29 01:25:05.693528 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-29 01:25:05.693800 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-29 01:25:05.693808 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-29T01:23:37.000000 | 2026-03-29 01:25:05.693817 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-29 01:25:05.693821 | orchestrator | | accessIPv4 | | 2026-03-29 01:25:05.693829 | orchestrator | | accessIPv6 | | 2026-03-29 01:25:05.693833 | orchestrator | | addresses | test=192.168.112.158, 192.168.200.165 | 2026-03-29 01:25:05.693837 | orchestrator | | config_drive | | 2026-03-29 01:25:05.693841 | orchestrator | | created | 2026-03-29T01:23:11Z | 2026-03-29 01:25:05.693848 | orchestrator | | description | None | 2026-03-29 01:25:05.693852 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-29 01:25:05.693856 | orchestrator | | hostId | f665b3aa67dcc241d1488c0443f74e1a11ace7c61fe43d1d78d0b56b | 2026-03-29 01:25:05.693860 | orchestrator | | host_status | None | 2026-03-29 01:25:05.693869 | orchestrator | | id | 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 | 2026-03-29 01:25:05.693873 | orchestrator | | image | N/A (booted from volume) | 2026-03-29 01:25:05.693880 | orchestrator | | key_name | test | 2026-03-29 01:25:05.693884 | orchestrator | | locked | False | 2026-03-29 01:25:05.693888 | orchestrator | | locked_reason | None | 2026-03-29 01:25:05.693892 | orchestrator | | name | test-4 | 2026-03-29 01:25:05.693898 | orchestrator | | pinned_availability_zone | None | 2026-03-29 01:25:05.693902 | orchestrator | | progress | 0 | 2026-03-29 
01:25:05.693906 | orchestrator | | project_id | 3d0e4fe97c334e3f9050b524d0851e66 | 2026-03-29 01:25:05.693910 | orchestrator | | properties | hostname='test-4' | 2026-03-29 01:25:05.693918 | orchestrator | | security_groups | name='icmp' | 2026-03-29 01:25:05.693926 | orchestrator | | | name='ssh' | 2026-03-29 01:25:05.693931 | orchestrator | | server_groups | None | 2026-03-29 01:25:05.693935 | orchestrator | | status | ACTIVE | 2026-03-29 01:25:05.693940 | orchestrator | | tags | test | 2026-03-29 01:25:05.693946 | orchestrator | | trusted_image_certificates | None | 2026-03-29 01:25:05.694108 | orchestrator | | updated | 2026-03-29T01:23:59Z | 2026-03-29 01:25:05.694129 | orchestrator | | user_id | 04ca45812db54b7b985775c855a79038 | 2026-03-29 01:25:05.694137 | orchestrator | | volumes_attached | delete_on_termination='True', id='94806e0c-e25f-45de-a5ab-49c8a2e1a5ce' | 2026-03-29 01:25:05.698799 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-29 01:25:05.930905 | orchestrator | + server_ping 2026-03-29 01:25:05.932879 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-29 01:25:05.932943 | orchestrator | ++ tr -d '\r' 2026-03-29 01:25:08.684480 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-29 01:25:08.684550 | orchestrator | + ping -c3 192.168.112.123 2026-03-29 01:25:08.695862 | orchestrator | PING 192.168.112.123 (192.168.112.123) 56(84) bytes of data. 
2026-03-29 01:25:08.695924 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=1 ttl=63 time=4.90 ms
2026-03-29 01:25:09.694430 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=2 ttl=63 time=2.39 ms
2026-03-29 01:25:10.695891 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=3 ttl=63 time=1.45 ms
2026-03-29 01:25:10.695956 | orchestrator |
2026-03-29 01:25:10.695998 | orchestrator | --- 192.168.112.123 ping statistics ---
2026-03-29 01:25:10.696007 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-29 01:25:10.696013 | orchestrator | rtt min/avg/max/mdev = 1.447/2.911/4.897/1.455 ms
2026-03-29 01:25:10.696590 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-29 01:25:10.696644 | orchestrator | + ping -c3 192.168.112.158
2026-03-29 01:25:10.704892 | orchestrator | PING 192.168.112.158 (192.168.112.158) 56(84) bytes of data.
2026-03-29 01:25:10.705020 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=1 ttl=63 time=4.32 ms
2026-03-29 01:25:11.704618 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=2 ttl=63 time=2.31 ms
2026-03-29 01:25:12.705743 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=3 ttl=63 time=1.79 ms
2026-03-29 01:25:12.705815 | orchestrator |
2026-03-29 01:25:12.705822 | orchestrator | --- 192.168.112.158 ping statistics ---
2026-03-29 01:25:12.705828 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-29 01:25:12.705833 | orchestrator | rtt min/avg/max/mdev = 1.788/2.805/4.315/1.088 ms
2026-03-29 01:25:12.707072 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-29 01:25:12.707118 | orchestrator | + ping -c3 192.168.112.116
2026-03-29 01:25:12.717960 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data.
2026-03-29 01:25:12.718089 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=6.94 ms
2026-03-29 01:25:13.714863 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.35 ms
2026-03-29 01:25:14.715012 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.62 ms
2026-03-29 01:25:14.715424 | orchestrator |
2026-03-29 01:25:14.715453 | orchestrator | --- 192.168.112.116 ping statistics ---
2026-03-29 01:25:14.715462 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-29 01:25:14.715469 | orchestrator | rtt min/avg/max/mdev = 1.624/3.638/6.943/2.355 ms
2026-03-29 01:25:14.715773 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-29 01:25:14.715792 | orchestrator | + ping -c3 192.168.112.122
2026-03-29 01:25:14.725020 | orchestrator | PING 192.168.112.122 (192.168.112.122) 56(84) bytes of data.
2026-03-29 01:25:14.725105 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=4.68 ms
2026-03-29 01:25:15.723474 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=2.04 ms
2026-03-29 01:25:16.724588 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=1.54 ms
2026-03-29 01:25:16.724683 | orchestrator |
2026-03-29 01:25:16.724693 | orchestrator | --- 192.168.112.122 ping statistics ---
2026-03-29 01:25:16.724699 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-29 01:25:16.724703 | orchestrator | rtt min/avg/max/mdev = 1.544/2.752/4.676/1.375 ms
2026-03-29 01:25:16.724708 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-29 01:25:16.724714 | orchestrator | + ping -c3 192.168.112.114
2026-03-29 01:25:16.736346 | orchestrator | PING 192.168.112.114 (192.168.112.114) 56(84) bytes of data.
2026-03-29 01:25:16.736454 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=1 ttl=63 time=6.65 ms
2026-03-29 01:25:17.733914 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=2 ttl=63 time=1.23 ms
2026-03-29 01:25:18.735874 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=3 ttl=63 time=1.75 ms
2026-03-29 01:25:18.735952 | orchestrator |
2026-03-29 01:25:18.735958 | orchestrator | --- 192.168.112.114 ping statistics ---
2026-03-29 01:25:18.735964 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-29 01:25:18.735970 | orchestrator | rtt min/avg/max/mdev = 1.227/3.209/6.649/2.441 ms
2026-03-29 01:25:18.736053 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-29 01:25:18.736082 | orchestrator | + compute_list
2026-03-29 01:25:18.736090 | orchestrator | + osism manage compute list testbed-node-3
2026-03-29 01:25:20.328875 | orchestrator | 2026-03-29 01:25:20 | ERROR | Unable to get ansible vault password
2026-03-29 01:25:20.328944 | orchestrator | 2026-03-29 01:25:20 | ERROR | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-29 01:25:20.328952 | orchestrator | 2026-03-29 01:25:20 | ERROR | Dropping encrypted entries
2026-03-29 01:25:23.921830 | orchestrator | +--------------------------------------+--------+----------+
2026-03-29 01:25:23.921932 | orchestrator | | ID | Name | Status |
2026-03-29 01:25:23.921939 | orchestrator | |--------------------------------------+--------+----------|
2026-03-29 01:25:23.921943 | orchestrator | | 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 | test-4 | ACTIVE |
2026-03-29 01:25:23.921948 | orchestrator | | 9879f342-fa4f-461a-a0d6-7f864d96b625 | test-3 | ACTIVE |
2026-03-29 01:25:23.921952 | orchestrator | | 2a114126-22c1-45ae-bff6-0a3f9f5bab92 | test-2 | ACTIVE |
2026-03-29 01:25:23.921956 | orchestrator | +--------------------------------------+--------+----------+
2026-03-29 01:25:24.236388 | orchestrator | + osism manage compute list testbed-node-4
2026-03-29 01:25:25.805356 | orchestrator | 2026-03-29 01:25:25 | ERROR | Unable to get ansible vault password
2026-03-29 01:25:25.805403 | orchestrator | 2026-03-29 01:25:25 | ERROR | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-29 01:25:25.805409 | orchestrator | 2026-03-29 01:25:25 | ERROR | Dropping encrypted entries
2026-03-29 01:25:27.167768 | orchestrator | +--------------------------------------+--------+----------+
2026-03-29 01:25:27.167819 | orchestrator | | ID | Name | Status |
2026-03-29 01:25:27.167824 | orchestrator | |--------------------------------------+--------+----------|
2026-03-29 01:25:27.167827 | orchestrator | | 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 | test-1 | ACTIVE |
2026-03-29 01:25:27.167831 | orchestrator | | 4abd36fc-851b-4802-9b25-835953847ff8 | test | ACTIVE |
2026-03-29 01:25:27.167834 | orchestrator | +--------------------------------------+--------+----------+
2026-03-29 01:25:27.457543 | orchestrator | + osism manage compute list testbed-node-5
2026-03-29 01:25:29.013536 | orchestrator | 2026-03-29 01:25:29 | ERROR | Unable to get ansible vault password
2026-03-29 01:25:29.013584 | orchestrator | 2026-03-29 01:25:29 | ERROR | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-29 01:25:29.013591 | orchestrator | 2026-03-29 01:25:29 | ERROR | Dropping encrypted entries
2026-03-29 01:25:30.028057 | orchestrator | +------+--------+----------+
2026-03-29 01:25:30.028121 | orchestrator | | ID | Name | Status |
2026-03-29 01:25:30.028130 | orchestrator | |------+--------+----------|
2026-03-29 01:25:30.028137 | orchestrator | +------+--------+----------+
2026-03-29 01:25:30.372484 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
2026-03-29 01:25:31.938660 | orchestrator | 2026-03-29 01:25:31 | ERROR | Unable to get ansible vault password
2026-03-29 01:25:31.938720 | orchestrator | 2026-03-29 01:25:31 | ERROR | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-29 01:25:31.938727 | orchestrator | 2026-03-29 01:25:31 | ERROR | Dropping encrypted entries
2026-03-29 01:25:33.273324 | orchestrator | 2026-03-29 01:25:33 | INFO | Live migrating server 4de0955e-c25d-4bbc-967a-3b3b6a0894f6
2026-03-29 01:25:46.314663 | orchestrator | 2026-03-29 01:25:46 | INFO | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress
2026-03-29 01:25:48.720397 | orchestrator | 2026-03-29 01:25:48 | INFO | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress
2026-03-29 01:25:51.341351 | orchestrator | 2026-03-29 01:25:51 | INFO | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress
2026-03-29 01:25:53.632913 | orchestrator | 2026-03-29 01:25:53 | INFO | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress
2026-03-29 01:25:56.200466 | orchestrator | 2026-03-29 01:25:56 | INFO | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress
2026-03-29 01:25:58.525441 | orchestrator | 2026-03-29 01:25:58 | INFO | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress
2026-03-29 01:26:00.748066 | orchestrator | 2026-03-29 01:26:00 | INFO | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress
2026-03-29 01:26:02.945204 | orchestrator | 2026-03-29 01:26:02 | INFO | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress
2026-03-29 01:26:05.429654 | orchestrator | 2026-03-29 01:26:05 | INFO | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) completed with status ACTIVE
2026-03-29 01:26:05.429726 | orchestrator | 2026-03-29 01:26:05 | INFO | Live migrating server 4abd36fc-851b-4802-9b25-835953847ff8
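The `osism manage compute migrate` output above polls each live migration until Nova reports it finished. Stripped of osism's internals, that polling pattern can be sketched roughly as follows; this is an illustrative sketch, not the tool's actual code, and the names `wait_until_idle`, the poll-command indirection, and the interval are assumptions. In the real job the poll command would wrap something like `openstack server show "$server" -f value -c OS-EXT-STS:task_state`, since Nova clears the task state once the live migration completes.

```shell
#!/bin/sh
# Sketch of the "still in progress" polling loop seen in the log above.
# wait_until_idle repeatedly runs a poll command that prints the server's
# current task_state, and loops until that state is "None".
wait_until_idle() {
    server="$1"
    poll="$2"      # command printing the server's current task_state
    while [ "$("$poll" "$server")" != "None" ]; do
        echo "live migration of $server is still in progress"
        sleep 1    # the real tool waits a couple of seconds between polls
    done
    echo "live migration of $server completed"
}
```

The indirection through `$poll` keeps the loop testable without a cloud: any command that eventually prints `None` can stand in for the `openstack` call.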
2026-03-29 01:26:17.839120 | orchestrator | 2026-03-29 01:26:17 | INFO | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:26:20.116526 | orchestrator | 2026-03-29 01:26:20 | INFO | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:26:22.485980 | orchestrator | 2026-03-29 01:26:22 | INFO | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:26:24.870128 | orchestrator | 2026-03-29 01:26:24 | INFO | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:26:27.134155 | orchestrator | 2026-03-29 01:26:27 | INFO | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:26:29.409650 | orchestrator | 2026-03-29 01:26:29 | INFO | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:26:31.734971 | orchestrator | 2026-03-29 01:26:31 | INFO | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:26:33.949491 | orchestrator | 2026-03-29 01:26:33 | INFO | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:26:36.193380 | orchestrator | 2026-03-29 01:26:36 | INFO | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:26:38.419610 | orchestrator | 2026-03-29 01:26:38 | INFO | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:26:40.713887 | orchestrator | 2026-03-29 01:26:40 | INFO | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:26:42.916468 | orchestrator | 2026-03-29 01:26:42 | INFO | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) completed with status ACTIVE
2026-03-29 01:26:43.260877 | orchestrator | + compute_list
2026-03-29 01:26:43.260921 | orchestrator | + osism manage compute list testbed-node-3
2026-03-29 01:26:44.919718 | orchestrator | 2026-03-29 01:26:44 | ERROR | Unable to get ansible vault password
2026-03-29 01:26:44.919848 | orchestrator | 2026-03-29 01:26:44 | ERROR | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-29 01:26:44.919869 | orchestrator | 2026-03-29 01:26:44 | ERROR | Dropping encrypted entries
2026-03-29 01:26:46.967081 | orchestrator | +--------------------------------------+--------+----------+
2026-03-29 01:26:46.967187 | orchestrator | | ID | Name | Status |
2026-03-29 01:26:46.967197 | orchestrator | |--------------------------------------+--------+----------|
2026-03-29 01:26:46.967205 | orchestrator | | 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 | test-4 | ACTIVE |
2026-03-29 01:26:46.967211 | orchestrator | | 9879f342-fa4f-461a-a0d6-7f864d96b625 | test-3 | ACTIVE |
2026-03-29 01:26:46.967218 | orchestrator | | 2a114126-22c1-45ae-bff6-0a3f9f5bab92 | test-2 | ACTIVE |
2026-03-29 01:26:46.967226 | orchestrator | | 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 | test-1 | ACTIVE |
2026-03-29 01:26:46.967233 | orchestrator | | 4abd36fc-851b-4802-9b25-835953847ff8 | test | ACTIVE |
2026-03-29 01:26:46.967240 | orchestrator | +--------------------------------------+--------+----------+
2026-03-29 01:26:47.252286 | orchestrator | + osism manage compute list testbed-node-4
2026-03-29 01:26:48.835479 | orchestrator | 2026-03-29 01:26:48 | ERROR | Unable to get ansible vault password
2026-03-29 01:26:48.835569 | orchestrator | 2026-03-29 01:26:48 | ERROR | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-29 01:26:48.835797 | orchestrator | 2026-03-29 01:26:48 | ERROR | Dropping encrypted entries
2026-03-29 01:26:50.104574 | orchestrator | +------+--------+----------+
2026-03-29 01:26:50.104663 | orchestrator | | ID | Name | Status |
2026-03-29 01:26:50.104674 | orchestrator | |------+--------+----------|
2026-03-29 01:26:50.104681 | orchestrator | +------+--------+----------+
2026-03-29 01:26:50.501408 | orchestrator | + osism manage compute list testbed-node-5
2026-03-29 01:26:52.153728 | orchestrator | 2026-03-29 01:26:52 | ERROR | Unable to get ansible vault password
2026-03-29 01:26:52.153808 | orchestrator | 2026-03-29 01:26:52 | ERROR | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-29 01:26:52.153823 | orchestrator | 2026-03-29 01:26:52 | ERROR | Dropping encrypted entries
2026-03-29 01:26:53.252273 | orchestrator | +------+--------+----------+
2026-03-29 01:26:53.252392 | orchestrator | | ID | Name | Status |
2026-03-29 01:26:53.252399 | orchestrator | |------+--------+----------|
2026-03-29 01:26:53.252425 | orchestrator | +------+--------+----------+
2026-03-29 01:26:53.560988 | orchestrator | + server_ping
2026-03-29 01:26:53.562899 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-29 01:26:53.562969 | orchestrator | ++ tr -d '\r'
2026-03-29 01:26:56.471912 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-29 01:26:56.471981 | orchestrator | + ping -c3 192.168.112.123
2026-03-29 01:26:56.485091 | orchestrator | PING 192.168.112.123 (192.168.112.123) 56(84) bytes of data.
2026-03-29 01:26:56.485223 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=1 ttl=63 time=9.18 ms
2026-03-29 01:26:57.478920 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=2 ttl=63 time=1.64 ms
2026-03-29 01:26:58.480347 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=3 ttl=63 time=1.20 ms
2026-03-29 01:26:58.480399 | orchestrator |
2026-03-29 01:26:58.480405 | orchestrator | --- 192.168.112.123 ping statistics ---
2026-03-29 01:26:58.480420 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-29 01:26:58.480428 | orchestrator | rtt min/avg/max/mdev = 1.204/4.008/9.176/3.658 ms
2026-03-29 01:26:58.480860 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-29 01:26:58.480877 | orchestrator | + ping -c3 192.168.112.158
2026-03-29 01:26:58.489999 | orchestrator | PING 192.168.112.158 (192.168.112.158) 56(84) bytes of data.
2026-03-29 01:26:58.490082 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=1 ttl=63 time=3.92 ms
2026-03-29 01:26:59.490266 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=2 ttl=63 time=2.51 ms
2026-03-29 01:27:00.492096 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=3 ttl=63 time=1.76 ms
2026-03-29 01:27:00.492253 | orchestrator |
2026-03-29 01:27:00.492265 | orchestrator | --- 192.168.112.158 ping statistics ---
2026-03-29 01:27:00.492273 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-29 01:27:00.492281 | orchestrator | rtt min/avg/max/mdev = 1.757/2.728/3.919/0.896 ms
2026-03-29 01:27:00.492971 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-29 01:27:00.493033 | orchestrator | + ping -c3 192.168.112.116
2026-03-29 01:27:00.502524 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data.
2026-03-29 01:27:00.502595 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=6.89 ms
2026-03-29 01:27:01.498348 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=1.62 ms
2026-03-29 01:27:02.500974 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.52 ms
2026-03-29 01:27:02.501041 | orchestrator |
2026-03-29 01:27:02.501056 | orchestrator | --- 192.168.112.116 ping statistics ---
2026-03-29 01:27:02.501066 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-29 01:27:02.501075 | orchestrator | rtt min/avg/max/mdev = 1.516/3.338/6.885/2.507 ms
2026-03-29 01:27:02.501084 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-29 01:27:02.501094 | orchestrator | + ping -c3 192.168.112.122
2026-03-29 01:27:02.509754 | orchestrator | PING 192.168.112.122 (192.168.112.122) 56(84) bytes of data.
2026-03-29 01:27:02.509823 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=4.15 ms
2026-03-29 01:27:03.508778 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=1.58 ms
2026-03-29 01:27:04.510241 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=1.80 ms
2026-03-29 01:27:04.510319 | orchestrator |
2026-03-29 01:27:04.510327 | orchestrator | --- 192.168.112.122 ping statistics ---
2026-03-29 01:27:04.510333 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-29 01:27:04.510337 | orchestrator | rtt min/avg/max/mdev = 1.575/2.509/4.150/1.163 ms
2026-03-29 01:27:04.510342 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-29 01:27:04.510347 | orchestrator | + ping -c3 192.168.112.114
2026-03-29 01:27:04.519567 | orchestrator | PING 192.168.112.114 (192.168.112.114) 56(84) bytes of data.
2026-03-29 01:27:04.519657 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=1 ttl=63 time=5.07 ms 2026-03-29 01:27:05.518538 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=2 ttl=63 time=2.68 ms 2026-03-29 01:27:06.520187 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=3 ttl=63 time=1.95 ms 2026-03-29 01:27:06.520271 | orchestrator | 2026-03-29 01:27:06.520279 | orchestrator | --- 192.168.112.114 ping statistics --- 2026-03-29 01:27:06.520285 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-29 01:27:06.520290 | orchestrator | rtt min/avg/max/mdev = 1.949/3.235/5.072/1.333 ms 2026-03-29 01:27:06.520777 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2026-03-29 01:27:08.190225 | orchestrator | 2026-03-29 01:27:08 | ERROR  | Unable to get ansible vault password 2026-03-29 01:27:08.190292 | orchestrator | 2026-03-29 01:27:08 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-29 01:27:08.190301 | orchestrator | 2026-03-29 01:27:08 | ERROR  | Dropping encrypted entries 2026-03-29 01:27:09.348457 | orchestrator | 2026-03-29 01:27:09 | INFO  | No migratable instances found on node testbed-node-5 2026-03-29 01:27:09.659387 | orchestrator | + compute_list 2026-03-29 01:27:09.659478 | orchestrator | + osism manage compute list testbed-node-3 2026-03-29 01:27:11.245315 | orchestrator | 2026-03-29 01:27:11 | ERROR  | Unable to get ansible vault password 2026-03-29 01:27:11.245382 | orchestrator | 2026-03-29 01:27:11 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-29 01:27:11.245411 | orchestrator | 2026-03-29 01:27:11 | ERROR  | Dropping encrypted entries 2026-03-29 01:27:12.616492 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-29 01:27:12.616541 | orchestrator | | ID | Name | Status | 
2026-03-29 01:27:12.616546 | orchestrator | |--------------------------------------+--------+----------| 2026-03-29 01:27:12.616561 | orchestrator | | 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 | test-4 | ACTIVE | 2026-03-29 01:27:12.616565 | orchestrator | | 9879f342-fa4f-461a-a0d6-7f864d96b625 | test-3 | ACTIVE | 2026-03-29 01:27:12.616569 | orchestrator | | 2a114126-22c1-45ae-bff6-0a3f9f5bab92 | test-2 | ACTIVE | 2026-03-29 01:27:12.616573 | orchestrator | | 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 | test-1 | ACTIVE | 2026-03-29 01:27:12.616577 | orchestrator | | 4abd36fc-851b-4802-9b25-835953847ff8 | test | ACTIVE | 2026-03-29 01:27:12.616582 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-29 01:27:12.928418 | orchestrator | + osism manage compute list testbed-node-4 2026-03-29 01:27:14.497334 | orchestrator | 2026-03-29 01:27:14 | ERROR  | Unable to get ansible vault password 2026-03-29 01:27:14.497381 | orchestrator | 2026-03-29 01:27:14 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-29 01:27:14.497388 | orchestrator | 2026-03-29 01:27:14 | ERROR  | Dropping encrypted entries 2026-03-29 01:27:15.575606 | orchestrator | +------+--------+----------+ 2026-03-29 01:27:15.575672 | orchestrator | | ID | Name | Status | 2026-03-29 01:27:15.575683 | orchestrator | |------+--------+----------| 2026-03-29 01:27:15.575692 | orchestrator | +------+--------+----------+ 2026-03-29 01:27:15.873712 | orchestrator | + osism manage compute list testbed-node-5 2026-03-29 01:27:17.490724 | orchestrator | 2026-03-29 01:27:17 | ERROR  | Unable to get ansible vault password 2026-03-29 01:27:17.490773 | orchestrator | 2026-03-29 01:27:17 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-29 01:27:17.490780 | orchestrator | 2026-03-29 01:27:17 | ERROR  | Dropping encrypted entries 2026-03-29 01:27:18.468087 | 
orchestrator | +------+--------+----------+ 2026-03-29 01:27:18.468156 | orchestrator | | ID | Name | Status | 2026-03-29 01:27:18.468166 | orchestrator | |------+--------+----------| 2026-03-29 01:27:18.468172 | orchestrator | +------+--------+----------+ 2026-03-29 01:27:18.764732 | orchestrator | + server_ping 2026-03-29 01:27:18.765243 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-29 01:27:18.765587 | orchestrator | ++ tr -d '\r' 2026-03-29 01:27:21.638900 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-29 01:27:21.639029 | orchestrator | + ping -c3 192.168.112.123 2026-03-29 01:27:21.648371 | orchestrator | PING 192.168.112.123 (192.168.112.123) 56(84) bytes of data. 2026-03-29 01:27:21.648481 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=1 ttl=63 time=7.25 ms 2026-03-29 01:27:22.645120 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=2 ttl=63 time=2.28 ms 2026-03-29 01:27:23.646225 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=3 ttl=63 time=1.71 ms 2026-03-29 01:27:23.646422 | orchestrator | 2026-03-29 01:27:23.646439 | orchestrator | --- 192.168.112.123 ping statistics --- 2026-03-29 01:27:23.646447 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-29 01:27:23.646455 | orchestrator | rtt min/avg/max/mdev = 1.711/3.745/7.250/2.488 ms 2026-03-29 01:27:23.646474 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-29 01:27:23.646482 | orchestrator | + ping -c3 192.168.112.158 2026-03-29 01:27:23.656531 | orchestrator | PING 192.168.112.158 (192.168.112.158) 56(84) bytes of data. 
2026-03-29 01:27:23.656605 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=1 ttl=63 time=4.99 ms 2026-03-29 01:27:24.655069 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=2 ttl=63 time=2.18 ms 2026-03-29 01:27:25.655847 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=3 ttl=63 time=1.15 ms 2026-03-29 01:27:25.655898 | orchestrator | 2026-03-29 01:27:25.655909 | orchestrator | --- 192.168.112.158 ping statistics --- 2026-03-29 01:27:25.655916 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-29 01:27:25.655922 | orchestrator | rtt min/avg/max/mdev = 1.146/2.771/4.988/1.623 ms 2026-03-29 01:27:25.655930 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-29 01:27:25.655936 | orchestrator | + ping -c3 192.168.112.116 2026-03-29 01:27:25.664267 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 2026-03-29 01:27:25.664316 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=4.03 ms 2026-03-29 01:27:26.663356 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=1.76 ms 2026-03-29 01:27:27.665011 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.16 ms 2026-03-29 01:27:27.665065 | orchestrator | 2026-03-29 01:27:27.665074 | orchestrator | --- 192.168.112.116 ping statistics --- 2026-03-29 01:27:27.665082 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-29 01:27:27.665087 | orchestrator | rtt min/avg/max/mdev = 1.161/2.316/4.026/1.233 ms 2026-03-29 01:27:27.665367 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-29 01:27:27.665379 | orchestrator | + ping -c3 192.168.112.122 2026-03-29 01:27:27.673007 | orchestrator | PING 192.168.112.122 (192.168.112.122) 56(84) bytes of data. 
2026-03-29 01:27:27.673048 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=3.20 ms 2026-03-29 01:27:28.673834 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=2.25 ms 2026-03-29 01:27:29.675626 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=1.70 ms 2026-03-29 01:27:29.675729 | orchestrator | 2026-03-29 01:27:29.675736 | orchestrator | --- 192.168.112.122 ping statistics --- 2026-03-29 01:27:29.675742 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-29 01:27:29.675747 | orchestrator | rtt min/avg/max/mdev = 1.701/2.383/3.201/0.619 ms 2026-03-29 01:27:29.675753 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-29 01:27:29.675758 | orchestrator | + ping -c3 192.168.112.114 2026-03-29 01:27:29.685266 | orchestrator | PING 192.168.112.114 (192.168.112.114) 56(84) bytes of data. 2026-03-29 01:27:29.685398 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=1 ttl=63 time=5.06 ms 2026-03-29 01:27:30.682630 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=2 ttl=63 time=1.28 ms 2026-03-29 01:27:31.683655 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=3 ttl=63 time=0.918 ms 2026-03-29 01:27:31.683735 | orchestrator | 2026-03-29 01:27:31.683743 | orchestrator | --- 192.168.112.114 ping statistics --- 2026-03-29 01:27:31.683749 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-03-29 01:27:31.683755 | orchestrator | rtt min/avg/max/mdev = 0.918/2.420/5.059/1.871 ms 2026-03-29 01:27:31.683761 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2026-03-29 01:27:33.253655 | orchestrator | 2026-03-29 01:27:33 | ERROR  | Unable to get ansible vault password 2026-03-29 01:27:33.253729 | orchestrator | 2026-03-29 01:27:33 | ERROR  | Unable to get vault secret: [Errno 2] No such file 
or directory: '/share/ansible_vault_password.key' 2026-03-29 01:27:33.253738 | orchestrator | 2026-03-29 01:27:33 | ERROR  | Dropping encrypted entries 2026-03-29 01:27:34.886375 | orchestrator | 2026-03-29 01:27:34 | INFO  | Live migrating server 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 2026-03-29 01:27:46.930254 | orchestrator | 2026-03-29 01:27:46 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) is still in progress 2026-03-29 01:27:49.202368 | orchestrator | 2026-03-29 01:27:49 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) is still in progress 2026-03-29 01:27:51.473898 | orchestrator | 2026-03-29 01:27:51 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) is still in progress 2026-03-29 01:27:53.735629 | orchestrator | 2026-03-29 01:27:53 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) is still in progress 2026-03-29 01:27:55.936977 | orchestrator | 2026-03-29 01:27:55 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) is still in progress 2026-03-29 01:27:58.144679 | orchestrator | 2026-03-29 01:27:58 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) is still in progress 2026-03-29 01:28:00.575319 | orchestrator | 2026-03-29 01:28:00 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) is still in progress 2026-03-29 01:28:02.792290 | orchestrator | 2026-03-29 01:28:02 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) is still in progress 2026-03-29 01:28:05.131076 | orchestrator | 2026-03-29 01:28:05 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) completed with status ACTIVE 2026-03-29 01:28:05.131255 | orchestrator | 2026-03-29 01:28:05 | INFO  | Live migrating server 9879f342-fa4f-461a-a0d6-7f864d96b625 2026-03-29 01:28:16.162177 | orchestrator | 2026-03-29 01:28:16 | INFO  | Live migration of 9879f342-fa4f-461a-a0d6-7f864d96b625 (test-3) is 
still in progress 2026-03-29 01:28:18.493104 | orchestrator | 2026-03-29 01:28:18 | INFO  | Live migration of 9879f342-fa4f-461a-a0d6-7f864d96b625 (test-3) is still in progress 2026-03-29 01:28:20.877990 | orchestrator | 2026-03-29 01:28:20 | INFO  | Live migration of 9879f342-fa4f-461a-a0d6-7f864d96b625 (test-3) is still in progress 2026-03-29 01:28:23.130380 | orchestrator | 2026-03-29 01:28:23 | INFO  | Live migration of 9879f342-fa4f-461a-a0d6-7f864d96b625 (test-3) is still in progress 2026-03-29 01:28:25.351741 | orchestrator | 2026-03-29 01:28:25 | INFO  | Live migration of 9879f342-fa4f-461a-a0d6-7f864d96b625 (test-3) is still in progress 2026-03-29 01:28:27.584272 | orchestrator | 2026-03-29 01:28:27 | INFO  | Live migration of 9879f342-fa4f-461a-a0d6-7f864d96b625 (test-3) is still in progress 2026-03-29 01:28:29.805805 | orchestrator | 2026-03-29 01:28:29 | INFO  | Live migration of 9879f342-fa4f-461a-a0d6-7f864d96b625 (test-3) is still in progress 2026-03-29 01:28:32.208011 | orchestrator | 2026-03-29 01:28:32 | INFO  | Live migration of 9879f342-fa4f-461a-a0d6-7f864d96b625 (test-3) is still in progress 2026-03-29 01:28:34.652348 | orchestrator | 2026-03-29 01:28:34 | INFO  | Live migration of 9879f342-fa4f-461a-a0d6-7f864d96b625 (test-3) completed with status ACTIVE 2026-03-29 01:28:34.652423 | orchestrator | 2026-03-29 01:28:34 | INFO  | Live migrating server 2a114126-22c1-45ae-bff6-0a3f9f5bab92 2026-03-29 01:28:45.572593 | orchestrator | 2026-03-29 01:28:45 | INFO  | Live migration of 2a114126-22c1-45ae-bff6-0a3f9f5bab92 (test-2) is still in progress 2026-03-29 01:28:47.837270 | orchestrator | 2026-03-29 01:28:47 | INFO  | Live migration of 2a114126-22c1-45ae-bff6-0a3f9f5bab92 (test-2) is still in progress 2026-03-29 01:28:50.126651 | orchestrator | 2026-03-29 01:28:50 | INFO  | Live migration of 2a114126-22c1-45ae-bff6-0a3f9f5bab92 (test-2) is still in progress 2026-03-29 01:28:52.414441 | orchestrator | 2026-03-29 01:28:52 | INFO  | Live migration of 
2a114126-22c1-45ae-bff6-0a3f9f5bab92 (test-2) is still in progress 2026-03-29 01:28:54.737110 | orchestrator | 2026-03-29 01:28:54 | INFO  | Live migration of 2a114126-22c1-45ae-bff6-0a3f9f5bab92 (test-2) is still in progress 2026-03-29 01:28:57.098178 | orchestrator | 2026-03-29 01:28:57 | INFO  | Live migration of 2a114126-22c1-45ae-bff6-0a3f9f5bab92 (test-2) is still in progress 2026-03-29 01:28:59.380536 | orchestrator | 2026-03-29 01:28:59 | INFO  | Live migration of 2a114126-22c1-45ae-bff6-0a3f9f5bab92 (test-2) is still in progress 2026-03-29 01:29:01.568504 | orchestrator | 2026-03-29 01:29:01 | INFO  | Live migration of 2a114126-22c1-45ae-bff6-0a3f9f5bab92 (test-2) is still in progress 2026-03-29 01:29:03.790096 | orchestrator | 2026-03-29 01:29:03 | INFO  | Live migration of 2a114126-22c1-45ae-bff6-0a3f9f5bab92 (test-2) completed with status ACTIVE 2026-03-29 01:29:03.790143 | orchestrator | 2026-03-29 01:29:03 | INFO  | Live migrating server 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 2026-03-29 01:29:14.378039 | orchestrator | 2026-03-29 01:29:14 | INFO  | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress 2026-03-29 01:29:16.664570 | orchestrator | 2026-03-29 01:29:16 | INFO  | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress 2026-03-29 01:29:19.033991 | orchestrator | 2026-03-29 01:29:19 | INFO  | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress 2026-03-29 01:29:21.398671 | orchestrator | 2026-03-29 01:29:21 | INFO  | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress 2026-03-29 01:29:23.618598 | orchestrator | 2026-03-29 01:29:23 | INFO  | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress 2026-03-29 01:29:25.831503 | orchestrator | 2026-03-29 01:29:25 | INFO  | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress 2026-03-29 01:29:28.058983 | orchestrator 
| 2026-03-29 01:29:28 | INFO  | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress 2026-03-29 01:29:30.409122 | orchestrator | 2026-03-29 01:29:30 | INFO  | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress 2026-03-29 01:29:32.822454 | orchestrator | 2026-03-29 01:29:32 | INFO  | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) completed with status ACTIVE 2026-03-29 01:29:32.822529 | orchestrator | 2026-03-29 01:29:32 | INFO  | Live migrating server 4abd36fc-851b-4802-9b25-835953847ff8 2026-03-29 01:29:43.058809 | orchestrator | 2026-03-29 01:29:43 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress 2026-03-29 01:29:45.498627 | orchestrator | 2026-03-29 01:29:45 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress 2026-03-29 01:29:47.776967 | orchestrator | 2026-03-29 01:29:47 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress 2026-03-29 01:29:50.195544 | orchestrator | 2026-03-29 01:29:50 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress 2026-03-29 01:29:52.598832 | orchestrator | 2026-03-29 01:29:52 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress 2026-03-29 01:29:54.911204 | orchestrator | 2026-03-29 01:29:54 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress 2026-03-29 01:29:57.185860 | orchestrator | 2026-03-29 01:29:57 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress 2026-03-29 01:29:59.458415 | orchestrator | 2026-03-29 01:29:59 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress 2026-03-29 01:30:01.684129 | orchestrator | 2026-03-29 01:30:01 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress 2026-03-29 
01:30:03.980215 | orchestrator | 2026-03-29 01:30:03 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) completed with status ACTIVE 2026-03-29 01:30:04.267599 | orchestrator | + compute_list 2026-03-29 01:30:04.267653 | orchestrator | + osism manage compute list testbed-node-3 2026-03-29 01:30:05.839324 | orchestrator | 2026-03-29 01:30:05 | ERROR  | Unable to get ansible vault password 2026-03-29 01:30:05.839387 | orchestrator | 2026-03-29 01:30:05 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-29 01:30:05.839401 | orchestrator | 2026-03-29 01:30:05 | ERROR  | Dropping encrypted entries 2026-03-29 01:30:07.017064 | orchestrator | +------+--------+----------+ 2026-03-29 01:30:07.017145 | orchestrator | | ID | Name | Status | 2026-03-29 01:30:07.017156 | orchestrator | |------+--------+----------| 2026-03-29 01:30:07.017162 | orchestrator | +------+--------+----------+ 2026-03-29 01:30:07.326084 | orchestrator | + osism manage compute list testbed-node-4 2026-03-29 01:30:08.889885 | orchestrator | 2026-03-29 01:30:08 | ERROR  | Unable to get ansible vault password 2026-03-29 01:30:08.889938 | orchestrator | 2026-03-29 01:30:08 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-29 01:30:08.889945 | orchestrator | 2026-03-29 01:30:08 | ERROR  | Dropping encrypted entries 2026-03-29 01:30:10.359541 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-29 01:30:10.359599 | orchestrator | | ID | Name | Status | 2026-03-29 01:30:10.359608 | orchestrator | |--------------------------------------+--------+----------| 2026-03-29 01:30:10.359613 | orchestrator | | 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 | test-4 | ACTIVE | 2026-03-29 01:30:10.359619 | orchestrator | | 9879f342-fa4f-461a-a0d6-7f864d96b625 | test-3 | ACTIVE | 2026-03-29 01:30:10.359622 | orchestrator | | 
2a114126-22c1-45ae-bff6-0a3f9f5bab92 | test-2 | ACTIVE | 2026-03-29 01:30:10.359625 | orchestrator | | 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 | test-1 | ACTIVE | 2026-03-29 01:30:10.359629 | orchestrator | | 4abd36fc-851b-4802-9b25-835953847ff8 | test | ACTIVE | 2026-03-29 01:30:10.359632 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-29 01:30:10.661824 | orchestrator | + osism manage compute list testbed-node-5 2026-03-29 01:30:12.223657 | orchestrator | 2026-03-29 01:30:12 | ERROR  | Unable to get ansible vault password 2026-03-29 01:30:12.223713 | orchestrator | 2026-03-29 01:30:12 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-29 01:30:12.223722 | orchestrator | 2026-03-29 01:30:12 | ERROR  | Dropping encrypted entries 2026-03-29 01:30:13.227577 | orchestrator | +------+--------+----------+ 2026-03-29 01:30:13.227634 | orchestrator | | ID | Name | Status | 2026-03-29 01:30:13.227643 | orchestrator | |------+--------+----------| 2026-03-29 01:30:13.227649 | orchestrator | +------+--------+----------+ 2026-03-29 01:30:13.511942 | orchestrator | + server_ping 2026-03-29 01:30:13.513508 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-29 01:30:13.513554 | orchestrator | ++ tr -d '\r' 2026-03-29 01:30:16.249502 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-29 01:30:16.249562 | orchestrator | + ping -c3 192.168.112.123 2026-03-29 01:30:16.259059 | orchestrator | PING 192.168.112.123 (192.168.112.123) 56(84) bytes of data. 
2026-03-29 01:30:16.259113 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=1 ttl=63 time=5.74 ms 2026-03-29 01:30:17.257169 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=2 ttl=63 time=2.39 ms 2026-03-29 01:30:18.258409 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=3 ttl=63 time=1.93 ms 2026-03-29 01:30:18.259005 | orchestrator | 2026-03-29 01:30:18.259059 | orchestrator | --- 192.168.112.123 ping statistics --- 2026-03-29 01:30:18.259068 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-29 01:30:18.259073 | orchestrator | rtt min/avg/max/mdev = 1.933/3.351/5.735/1.695 ms 2026-03-29 01:30:18.259543 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-29 01:30:18.259568 | orchestrator | + ping -c3 192.168.112.158 2026-03-29 01:30:18.270974 | orchestrator | PING 192.168.112.158 (192.168.112.158) 56(84) bytes of data. 2026-03-29 01:30:18.271089 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=1 ttl=63 time=6.87 ms 2026-03-29 01:30:19.268874 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=2 ttl=63 time=3.57 ms 2026-03-29 01:30:20.270630 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=3 ttl=63 time=3.28 ms 2026-03-29 01:30:20.270719 | orchestrator | 2026-03-29 01:30:20.270729 | orchestrator | --- 192.168.112.158 ping statistics --- 2026-03-29 01:30:20.270738 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-29 01:30:20.270744 | orchestrator | rtt min/avg/max/mdev = 3.282/4.575/6.873/1.629 ms 2026-03-29 01:30:20.270827 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-29 01:30:20.270837 | orchestrator | + ping -c3 192.168.112.116 2026-03-29 01:30:20.280433 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 
2026-03-29 01:30:20.280523 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=6.84 ms 2026-03-29 01:30:21.277414 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=1.89 ms 2026-03-29 01:30:22.279187 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=2.27 ms 2026-03-29 01:30:22.279341 | orchestrator | 2026-03-29 01:30:22.279357 | orchestrator | --- 192.168.112.116 ping statistics --- 2026-03-29 01:30:22.279367 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-29 01:30:22.279374 | orchestrator | rtt min/avg/max/mdev = 1.893/3.665/6.839/2.249 ms 2026-03-29 01:30:22.279716 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-29 01:30:22.279747 | orchestrator | + ping -c3 192.168.112.122 2026-03-29 01:30:22.293010 | orchestrator | PING 192.168.112.122 (192.168.112.122) 56(84) bytes of data. 2026-03-29 01:30:22.293095 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=9.06 ms 2026-03-29 01:30:23.287020 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=1.77 ms 2026-03-29 01:30:24.289917 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=1.88 ms 2026-03-29 01:30:24.290009 | orchestrator | 2026-03-29 01:30:24.290055 | orchestrator | --- 192.168.112.122 ping statistics --- 2026-03-29 01:30:24.290063 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-29 01:30:24.290070 | orchestrator | rtt min/avg/max/mdev = 1.773/4.235/9.059/3.410 ms 2026-03-29 01:30:24.290094 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-29 01:30:24.290101 | orchestrator | + ping -c3 192.168.112.114 2026-03-29 01:30:24.298551 | orchestrator | PING 192.168.112.114 (192.168.112.114) 56(84) bytes of data. 
2026-03-29 01:30:24.298619 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=1 ttl=63 time=5.65 ms 2026-03-29 01:30:25.297766 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=2 ttl=63 time=2.74 ms 2026-03-29 01:30:26.298465 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=3 ttl=63 time=1.19 ms 2026-03-29 01:30:26.298527 | orchestrator | 2026-03-29 01:30:26.298537 | orchestrator | --- 192.168.112.114 ping statistics --- 2026-03-29 01:30:26.298546 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-03-29 01:30:26.298553 | orchestrator | rtt min/avg/max/mdev = 1.192/3.194/5.653/1.849 ms 2026-03-29 01:30:26.298611 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2026-03-29 01:30:27.902877 | orchestrator | 2026-03-29 01:30:27 | ERROR  | Unable to get ansible vault password 2026-03-29 01:30:27.902938 | orchestrator | 2026-03-29 01:30:27 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-29 01:30:27.902947 | orchestrator | 2026-03-29 01:30:27 | ERROR  | Dropping encrypted entries 2026-03-29 01:30:29.417327 | orchestrator | 2026-03-29 01:30:29 | INFO  | Live migrating server 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 2026-03-29 01:30:42.252775 | orchestrator | 2026-03-29 01:30:42 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) is still in progress 2026-03-29 01:30:44.694711 | orchestrator | 2026-03-29 01:30:44 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) is still in progress 2026-03-29 01:30:47.113794 | orchestrator | 2026-03-29 01:30:47 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) is still in progress 2026-03-29 01:30:49.358750 | orchestrator | 2026-03-29 01:30:49 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) is still in progress 2026-03-29 01:30:51.628654 | orchestrator | 2026-03-29 01:30:51 | INFO  | 
Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) is still in progress 2026-03-29 01:30:53.945513 | orchestrator | 2026-03-29 01:30:53 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) is still in progress 2026-03-29 01:30:56.338409 | orchestrator | 2026-03-29 01:30:56 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) is still in progress 2026-03-29 01:30:58.676611 | orchestrator | 2026-03-29 01:30:58 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) is still in progress 2026-03-29 01:31:00.877262 | orchestrator | 2026-03-29 01:31:00 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) is still in progress 2026-03-29 01:31:03.182524 | orchestrator | 2026-03-29 01:31:03 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) is still in progress 2026-03-29 01:31:05.505918 | orchestrator | 2026-03-29 01:31:05 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) is still in progress 2026-03-29 01:31:07.761272 | orchestrator | 2026-03-29 01:31:07 | INFO  | Live migration of 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 (test-4) completed with status ACTIVE 2026-03-29 01:31:07.761360 | orchestrator | 2026-03-29 01:31:07 | INFO  | Live migrating server 9879f342-fa4f-461a-a0d6-7f864d96b625 2026-03-29 01:31:18.627971 | orchestrator | 2026-03-29 01:31:18 | INFO  | Live migration of 9879f342-fa4f-461a-a0d6-7f864d96b625 (test-3) is still in progress 2026-03-29 01:31:21.110682 | orchestrator | 2026-03-29 01:31:21 | INFO  | Live migration of 9879f342-fa4f-461a-a0d6-7f864d96b625 (test-3) is still in progress 2026-03-29 01:31:23.388191 | orchestrator | 2026-03-29 01:31:23 | INFO  | Live migration of 9879f342-fa4f-461a-a0d6-7f864d96b625 (test-3) is still in progress 2026-03-29 01:31:25.642134 | orchestrator | 2026-03-29 01:31:25 | INFO  | Live migration of 9879f342-fa4f-461a-a0d6-7f864d96b625 (test-3) is still in progress 2026-03-29 
01:31:27.935708 | orchestrator | 2026-03-29 01:31:27 | INFO  | Live migration of 9879f342-fa4f-461a-a0d6-7f864d96b625 (test-3) is still in progress
2026-03-29 01:31:30.203190 | orchestrator | 2026-03-29 01:31:30 | INFO  | Live migration of 9879f342-fa4f-461a-a0d6-7f864d96b625 (test-3) is still in progress
2026-03-29 01:31:32.560184 | orchestrator | 2026-03-29 01:31:32 | INFO  | Live migration of 9879f342-fa4f-461a-a0d6-7f864d96b625 (test-3) is still in progress
2026-03-29 01:31:34.887963 | orchestrator | 2026-03-29 01:31:34 | INFO  | Live migration of 9879f342-fa4f-461a-a0d6-7f864d96b625 (test-3) is still in progress
2026-03-29 01:31:37.185266 | orchestrator | 2026-03-29 01:31:37 | INFO  | Live migration of 9879f342-fa4f-461a-a0d6-7f864d96b625 (test-3) completed with status ACTIVE
2026-03-29 01:31:37.185331 | orchestrator | 2026-03-29 01:31:37 | INFO  | Live migrating server 2a114126-22c1-45ae-bff6-0a3f9f5bab92
2026-03-29 01:31:46.689248 | orchestrator | 2026-03-29 01:31:46 | INFO  | Live migration of 2a114126-22c1-45ae-bff6-0a3f9f5bab92 (test-2) is still in progress
2026-03-29 01:31:48.962533 | orchestrator | 2026-03-29 01:31:48 | INFO  | Live migration of 2a114126-22c1-45ae-bff6-0a3f9f5bab92 (test-2) is still in progress
2026-03-29 01:31:51.412762 | orchestrator | 2026-03-29 01:31:51 | INFO  | Live migration of 2a114126-22c1-45ae-bff6-0a3f9f5bab92 (test-2) is still in progress
2026-03-29 01:31:53.607645 | orchestrator | 2026-03-29 01:31:53 | INFO  | Live migration of 2a114126-22c1-45ae-bff6-0a3f9f5bab92 (test-2) is still in progress
2026-03-29 01:31:55.938071 | orchestrator | 2026-03-29 01:31:55 | INFO  | Live migration of 2a114126-22c1-45ae-bff6-0a3f9f5bab92 (test-2) is still in progress
2026-03-29 01:31:58.393086 | orchestrator | 2026-03-29 01:31:58 | INFO  | Live migration of 2a114126-22c1-45ae-bff6-0a3f9f5bab92 (test-2) is still in progress
2026-03-29 01:32:00.626659 | orchestrator | 2026-03-29 01:32:00 | INFO  | Live migration of 2a114126-22c1-45ae-bff6-0a3f9f5bab92 (test-2) is still in progress
2026-03-29 01:32:02.851855 | orchestrator | 2026-03-29 01:32:02 | INFO  | Live migration of 2a114126-22c1-45ae-bff6-0a3f9f5bab92 (test-2) is still in progress
2026-03-29 01:32:05.241586 | orchestrator | 2026-03-29 01:32:05 | INFO  | Live migration of 2a114126-22c1-45ae-bff6-0a3f9f5bab92 (test-2) completed with status ACTIVE
2026-03-29 01:32:05.241672 | orchestrator | 2026-03-29 01:32:05 | INFO  | Live migrating server 4de0955e-c25d-4bbc-967a-3b3b6a0894f6
2026-03-29 01:32:15.036431 | orchestrator | 2026-03-29 01:32:15 | INFO  | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress
2026-03-29 01:32:17.418644 | orchestrator | 2026-03-29 01:32:17 | INFO  | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress
2026-03-29 01:32:19.694142 | orchestrator | 2026-03-29 01:32:19 | INFO  | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress
2026-03-29 01:32:21.972269 | orchestrator | 2026-03-29 01:32:21 | INFO  | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress
2026-03-29 01:32:24.186211 | orchestrator | 2026-03-29 01:32:24 | INFO  | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress
2026-03-29 01:32:26.401852 | orchestrator | 2026-03-29 01:32:26 | INFO  | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress
2026-03-29 01:32:28.693910 | orchestrator | 2026-03-29 01:32:28 | INFO  | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress
2026-03-29 01:32:30.997364 | orchestrator | 2026-03-29 01:32:30 | INFO  | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) is still in progress
2026-03-29 01:32:33.375927 | orchestrator | 2026-03-29 01:32:33 | INFO  | Live migration of 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 (test-1) completed with status ACTIVE
2026-03-29 01:32:33.376054 | orchestrator | 2026-03-29 01:32:33 | INFO  | Live migrating server 4abd36fc-851b-4802-9b25-835953847ff8
2026-03-29 01:32:43.078698 | orchestrator | 2026-03-29 01:32:43 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:32:45.462146 | orchestrator | 2026-03-29 01:32:45 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:32:47.857787 | orchestrator | 2026-03-29 01:32:47 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:32:50.151464 | orchestrator | 2026-03-29 01:32:50 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:32:52.432306 | orchestrator | 2026-03-29 01:32:52 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:32:54.721384 | orchestrator | 2026-03-29 01:32:54 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:32:57.014195 | orchestrator | 2026-03-29 01:32:57 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:32:59.406984 | orchestrator | 2026-03-29 01:32:59 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:33:01.687961 | orchestrator | 2026-03-29 01:33:01 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:33:03.967046 | orchestrator | 2026-03-29 01:33:03 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) is still in progress
2026-03-29 01:33:06.350644 | orchestrator | 2026-03-29 01:33:06 | INFO  | Live migration of 4abd36fc-851b-4802-9b25-835953847ff8 (test) completed with status ACTIVE
2026-03-29 01:33:06.699415 | orchestrator | + compute_list
2026-03-29 01:33:06.699483 | orchestrator | + osism manage compute list testbed-node-3
2026-03-29 01:33:08.275851 | orchestrator | 2026-03-29 01:33:08 | ERROR  | Unable to get ansible vault password
2026-03-29 01:33:08.275942 | orchestrator | 2026-03-29 01:33:08 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-29 01:33:08.275955 | orchestrator | 2026-03-29 01:33:08 | ERROR  | Dropping encrypted entries
2026-03-29 01:33:09.539605 | orchestrator | +------+--------+----------+
2026-03-29 01:33:09.539695 | orchestrator | | ID | Name | Status |
2026-03-29 01:33:09.539706 | orchestrator | |------+--------+----------|
2026-03-29 01:33:09.539712 | orchestrator | +------+--------+----------+
2026-03-29 01:33:09.815122 | orchestrator | + osism manage compute list testbed-node-4
2026-03-29 01:33:11.347186 | orchestrator | 2026-03-29 01:33:11 | ERROR  | Unable to get ansible vault password
2026-03-29 01:33:11.347279 | orchestrator | 2026-03-29 01:33:11 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-29 01:33:11.347291 | orchestrator | 2026-03-29 01:33:11 | ERROR  | Dropping encrypted entries
2026-03-29 01:33:12.517324 | orchestrator | +------+--------+----------+
2026-03-29 01:33:12.517428 | orchestrator | | ID | Name | Status |
2026-03-29 01:33:12.517438 | orchestrator | |------+--------+----------|
2026-03-29 01:33:12.517442 | orchestrator | +------+--------+----------+
2026-03-29 01:33:12.849107 | orchestrator | + osism manage compute list testbed-node-5
2026-03-29 01:33:14.412798 | orchestrator | 2026-03-29 01:33:14 | ERROR  | Unable to get ansible vault password
2026-03-29 01:33:14.412846 | orchestrator | 2026-03-29 01:33:14 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-29 01:33:14.412853 | orchestrator | 2026-03-29 01:33:14 | ERROR  | Dropping encrypted entries
2026-03-29 01:33:15.899800 | orchestrator | +--------------------------------------+--------+----------+
2026-03-29 01:33:15.899883 | orchestrator | | ID | Name | Status |
2026-03-29 01:33:15.899890 | orchestrator | |--------------------------------------+--------+----------|
2026-03-29 01:33:15.899894 | orchestrator | | 17f69f5f-d8fe-49ba-9379-755b2c8f04f8 | test-4 | ACTIVE |
2026-03-29 01:33:15.899899 | orchestrator | | 9879f342-fa4f-461a-a0d6-7f864d96b625 | test-3 | ACTIVE |
2026-03-29 01:33:15.899903 | orchestrator | | 2a114126-22c1-45ae-bff6-0a3f9f5bab92 | test-2 | ACTIVE |
2026-03-29 01:33:15.899907 | orchestrator | | 4de0955e-c25d-4bbc-967a-3b3b6a0894f6 | test-1 | ACTIVE |
2026-03-29 01:33:15.899911 | orchestrator | | 4abd36fc-851b-4802-9b25-835953847ff8 | test | ACTIVE |
2026-03-29 01:33:15.899915 | orchestrator | +--------------------------------------+--------+----------+
2026-03-29 01:33:16.178867 | orchestrator | + server_ping
2026-03-29 01:33:16.179721 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-29 01:33:16.180314 | orchestrator | ++ tr -d '\r'
2026-03-29 01:33:19.163439 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-29 01:33:19.163514 | orchestrator | + ping -c3 192.168.112.123
2026-03-29 01:33:19.171997 | orchestrator | PING 192.168.112.123 (192.168.112.123) 56(84) bytes of data.
2026-03-29 01:33:19.172122 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=1 ttl=63 time=4.88 ms
2026-03-29 01:33:20.170558 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=2 ttl=63 time=2.07 ms
2026-03-29 01:33:21.171674 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=3 ttl=63 time=2.00 ms
2026-03-29 01:33:21.171760 | orchestrator |
2026-03-29 01:33:21.171770 | orchestrator | --- 192.168.112.123 ping statistics ---
2026-03-29 01:33:21.171778 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-29 01:33:21.171786 | orchestrator | rtt min/avg/max/mdev = 2.004/2.984/4.875/1.337 ms
2026-03-29 01:33:21.171794 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-29 01:33:21.171801 | orchestrator | + ping -c3 192.168.112.158
2026-03-29 01:33:21.183782 | orchestrator | PING 192.168.112.158 (192.168.112.158) 56(84) bytes of data.
2026-03-29 01:33:21.183855 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=1 ttl=63 time=7.04 ms
2026-03-29 01:33:22.180132 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=2 ttl=63 time=2.14 ms
2026-03-29 01:33:23.181460 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=3 ttl=63 time=1.97 ms
2026-03-29 01:33:23.181563 | orchestrator |
2026-03-29 01:33:23.181577 | orchestrator | --- 192.168.112.158 ping statistics ---
2026-03-29 01:33:23.181585 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-29 01:33:23.181592 | orchestrator | rtt min/avg/max/mdev = 1.965/3.713/7.037/2.351 ms
2026-03-29 01:33:23.181843 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-29 01:33:23.181858 | orchestrator | + ping -c3 192.168.112.116
2026-03-29 01:33:23.193418 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data.
2026-03-29 01:33:23.193503 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=6.88 ms
2026-03-29 01:33:24.192889 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=4.02 ms
2026-03-29 01:33:25.191497 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.93 ms
2026-03-29 01:33:25.191570 | orchestrator |
2026-03-29 01:33:25.191576 | orchestrator | --- 192.168.112.116 ping statistics ---
2026-03-29 01:33:25.191581 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-29 01:33:25.191586 | orchestrator | rtt min/avg/max/mdev = 1.927/4.276/6.880/2.030 ms
2026-03-29 01:33:25.192104 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-29 01:33:25.192131 | orchestrator | + ping -c3 192.168.112.122
2026-03-29 01:33:25.204618 | orchestrator | PING 192.168.112.122 (192.168.112.122) 56(84) bytes of data.
2026-03-29 01:33:25.204688 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=7.79 ms
2026-03-29 01:33:26.201570 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=2.17 ms
2026-03-29 01:33:27.202795 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=1.70 ms
2026-03-29 01:33:27.202865 | orchestrator |
2026-03-29 01:33:27.202872 | orchestrator | --- 192.168.112.122 ping statistics ---
2026-03-29 01:33:27.202877 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-29 01:33:27.202882 | orchestrator | rtt min/avg/max/mdev = 1.695/3.886/7.793/2.769 ms
2026-03-29 01:33:27.202887 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-29 01:33:27.202892 | orchestrator | + ping -c3 192.168.112.114
2026-03-29 01:33:27.213289 | orchestrator | PING 192.168.112.114 (192.168.112.114) 56(84) bytes of data.
2026-03-29 01:33:27.213400 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=1 ttl=63 time=5.65 ms
2026-03-29 01:33:28.211870 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=2 ttl=63 time=2.35 ms
2026-03-29 01:33:29.212987 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=3 ttl=63 time=1.73 ms
2026-03-29 01:33:29.213077 | orchestrator |
2026-03-29 01:33:29.213087 | orchestrator | --- 192.168.112.114 ping statistics ---
2026-03-29 01:33:29.213095 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-29 01:33:29.213102 | orchestrator | rtt min/avg/max/mdev = 1.727/3.239/5.647/1.720 ms
2026-03-29 01:33:29.698299 | orchestrator | ok: Runtime: 0:16:37.332868
2026-03-29 01:33:29.752681 |
2026-03-29 01:33:29.752815 | TASK [Run tempest]
2026-03-29 01:33:30.499459 | orchestrator |
2026-03-29 01:33:30.919075 | orchestrator | # Tempest
2026-03-29 01:33:30.919138 | orchestrator |
2026-03-29 01:33:30.919146 | orchestrator | + set -e
2026-03-29 01:33:30.919164 | orchestrator | + source /opt/manager-vars.sh
2026-03-29 01:33:30.919174 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-29 01:33:30.919184 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-29 01:33:30.919209 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-29 01:33:30.919221 | orchestrator | ++ CEPH_VERSION=reef
2026-03-29 01:33:30.919229 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-29 01:33:30.919236 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-29 01:33:30.919247 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-29 01:33:30.919255 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-29 01:33:30.919261 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-29 01:33:30.919270 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-29 01:33:30.919275 | orchestrator | ++ export ARA=false
2026-03-29 01:33:30.919280 | orchestrator | ++ ARA=false
2026-03-29 01:33:30.919290 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-29 01:33:30.919296 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-29 01:33:30.919300 | orchestrator | ++ export TEMPEST=true
2026-03-29 01:33:30.919309 | orchestrator | ++ TEMPEST=true
2026-03-29 01:33:30.919314 | orchestrator | ++ export IS_ZUUL=true
2026-03-29 01:33:30.919319 | orchestrator | ++ IS_ZUUL=true
2026-03-29 01:33:30.919327 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35
2026-03-29 01:33:30.919332 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35
2026-03-29 01:33:30.919337 | orchestrator | ++ export EXTERNAL_API=false
2026-03-29 01:33:30.919361 | orchestrator | ++ EXTERNAL_API=false
2026-03-29 01:33:30.919370 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-29 01:33:30.919378 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-29 01:33:30.919386 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-29 01:33:30.919411 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-29 01:33:30.919419 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-29 01:33:30.919437 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-29 01:33:30.919443 | orchestrator | + echo
2026-03-29 01:33:30.919448 | orchestrator | + echo '# Tempest'
2026-03-29 01:33:30.919454 | orchestrator | + echo
2026-03-29 01:33:30.919459 | orchestrator | + [[ ! -e /opt/tempest ]]
2026-03-29 01:33:30.919464 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-03-29 01:33:41.844717 | orchestrator | 2026-03-29 01:33:41 | INFO  | Prepare task for execution of tempest.
2026-03-29 01:33:41.922841 | orchestrator | 2026-03-29 01:33:41 | INFO  | Task 344d52d3-9c88-4e87-8972-37333908df7c (tempest) was prepared for execution.
2026-03-29 01:33:41.922915 | orchestrator | 2026-03-29 01:33:41 | INFO  | It takes a moment until task 344d52d3-9c88-4e87-8972-37333908df7c (tempest) has been started and output is visible here.
2026-03-29 01:34:57.062563 | orchestrator |
2026-03-29 01:34:57.062675 | orchestrator | PLAY [Run tempest] *************************************************************
2026-03-29 01:34:57.062688 | orchestrator |
2026-03-29 01:34:57.062695 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] **********************
2026-03-29 01:34:57.062713 | orchestrator | Sunday 29 March 2026 01:33:44 +0000 (0:00:00.228) 0:00:00.228 **********
2026-03-29 01:34:57.062721 | orchestrator | changed: [testbed-manager]
2026-03-29 01:34:57.062729 | orchestrator |
2026-03-29 01:34:57.062736 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] *****************
2026-03-29 01:34:57.062743 | orchestrator | Sunday 29 March 2026 01:33:45 +0000 (0:00:00.930) 0:00:01.159 **********
2026-03-29 01:34:57.062751 | orchestrator | changed: [testbed-manager]
2026-03-29 01:34:57.062758 | orchestrator |
2026-03-29 01:34:57.062765 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] ***
2026-03-29 01:34:57.062772 | orchestrator | Sunday 29 March 2026 01:33:46 +0000 (0:00:01.084) 0:00:02.244 **********
2026-03-29 01:34:57.062779 | orchestrator | ok: [testbed-manager]
2026-03-29 01:34:57.062787 | orchestrator |
2026-03-29 01:34:57.062794 | orchestrator | TASK [osism.validations.tempest : Init tempest] ********************************
2026-03-29 01:34:57.062801 | orchestrator | Sunday 29 March 2026 01:33:47 +0000 (0:00:00.382) 0:00:02.626 **********
2026-03-29 01:34:57.062807 | orchestrator | changed: [testbed-manager]
2026-03-29 01:34:57.062813 | orchestrator |
2026-03-29 01:34:57.062819 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] ***************************
2026-03-29 01:34:57.062825 | orchestrator | Sunday 29 March 2026 01:34:06 +0000 (0:00:18.718) 0:00:21.344 **********
2026-03-29 01:34:57.062862 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3)
2026-03-29 01:34:57.062869 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2)
2026-03-29 01:34:57.062878 | orchestrator |
2026-03-29 01:34:57.062885 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************
2026-03-29 01:34:57.062891 | orchestrator | Sunday 29 March 2026 01:34:13 +0000 (0:00:07.660) 0:00:29.005 **********
2026-03-29 01:34:57.062898 | orchestrator | ok: [testbed-manager] => {
2026-03-29 01:34:57.062904 | orchestrator |  "changed": false,
2026-03-29 01:34:57.062910 | orchestrator |  "msg": "All assertions passed"
2026-03-29 01:34:57.062917 | orchestrator | }
2026-03-29 01:34:57.062924 | orchestrator |
2026-03-29 01:34:57.062930 | orchestrator | TASK [osism.validations.tempest : Get auth token] ******************************
2026-03-29 01:34:57.062937 | orchestrator | Sunday 29 March 2026 01:34:13 +0000 (0:00:00.164) 0:00:29.169 **********
2026-03-29 01:34:57.062943 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:34:57.062949 | orchestrator |
2026-03-29 01:34:57.062955 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************
2026-03-29 01:34:57.062961 | orchestrator | Sunday 29 March 2026 01:34:17 +0000 (0:00:03.329) 0:00:32.499 **********
2026-03-29 01:34:57.062968 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:34:57.062975 | orchestrator |
2026-03-29 01:34:57.062981 | orchestrator | TASK [osism.validations.tempest : Get service catalog] *************************
2026-03-29 01:34:57.062988 | orchestrator | Sunday 29 March 2026 01:34:18 +0000 (0:00:01.739) 0:00:34.239 **********
2026-03-29 01:34:57.062995 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:34:57.063001 | orchestrator |
2026-03-29 01:34:57.063007 | orchestrator | TASK [osism.validations.tempest : Register img_file name] **********************
2026-03-29 01:34:57.063013 | orchestrator | Sunday 29 March 2026 01:34:22 +0000 (0:00:03.540) 0:00:37.780 **********
2026-03-29 01:34:57.063020 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:34:57.063026 | orchestrator |
2026-03-29 01:34:57.063032 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************
2026-03-29 01:34:57.063038 | orchestrator | Sunday 29 March 2026 01:34:22 +0000 (0:00:00.171) 0:00:37.951 **********
2026-03-29 01:34:57.063044 | orchestrator | changed: [testbed-manager]
2026-03-29 01:34:57.063051 | orchestrator |
2026-03-29 01:34:57.063057 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ******************
2026-03-29 01:34:57.063063 | orchestrator | Sunday 29 March 2026 01:34:25 +0000 (0:00:02.494) 0:00:40.446 **********
2026-03-29 01:34:57.063069 | orchestrator | changed: [testbed-manager]
2026-03-29 01:34:57.063076 | orchestrator |
2026-03-29 01:34:57.063083 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************
2026-03-29 01:34:57.063089 | orchestrator | Sunday 29 March 2026 01:34:35 +0000 (0:00:10.717) 0:00:51.163 **********
2026-03-29 01:34:57.063096 | orchestrator | changed: [testbed-manager]
2026-03-29 01:34:57.063102 | orchestrator |
2026-03-29 01:34:57.063109 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ******************
2026-03-29 01:34:57.063115 | orchestrator | Sunday 29 March 2026 01:34:36 +0000 (0:00:00.788) 0:00:51.951 **********
2026-03-29 01:34:57.063121 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:34:57.063127 | orchestrator |
2026-03-29 01:34:57.063135 | orchestrator | TASK [osism.validations.tempest : Revoke token] ********************************
2026-03-29 01:34:57.063142 | orchestrator | Sunday 29 March 2026 01:34:38 +0000 (0:00:01.655) 0:00:53.607 **********
2026-03-29 01:34:57.063148 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:34:57.063154 | orchestrator |
2026-03-29 01:34:57.063161 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] ***
2026-03-29 01:34:57.063169 | orchestrator | Sunday 29 March 2026 01:34:40 +0000 (0:00:01.779) 0:00:55.387 **********
2026-03-29 01:34:57.063175 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:34:57.063181 | orchestrator |
2026-03-29 01:34:57.063188 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] *********
2026-03-29 01:34:57.063201 | orchestrator | Sunday 29 March 2026 01:34:40 +0000 (0:00:00.193) 0:00:55.581 **********
2026-03-29 01:34:57.063207 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:34:57.063213 | orchestrator |
2026-03-29 01:34:57.063227 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] *****************
2026-03-29 01:34:57.063234 | orchestrator | Sunday 29 March 2026 01:34:40 +0000 (0:00:00.408) 0:00:55.989 **********
2026-03-29 01:34:57.063240 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:34:57.063247 | orchestrator |
2026-03-29 01:34:57.063254 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] ***
2026-03-29 01:34:57.063278 | orchestrator | Sunday 29 March 2026 01:34:45 +0000 (0:00:04.456) 0:01:00.446 **********
2026-03-29 01:34:57.063285 | orchestrator | ok: [testbed-manager -> localhost] => {
2026-03-29 01:34:57.063291 | orchestrator |  "changed": false,
2026-03-29 01:34:57.063298 | orchestrator |  "msg": "All assertions passed"
2026-03-29 01:34:57.063305 | orchestrator | }
2026-03-29 01:34:57.063312 | orchestrator |
2026-03-29 01:34:57.063319 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] **************************
2026-03-29 01:34:57.063326 | orchestrator | Sunday 29 March 2026 01:34:45 +0000 (0:00:00.200) 0:01:00.646 **********
2026-03-29 01:34:57.063333 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-03-29 01:34:57.063342 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-03-29 01:34:57.063348 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:34:57.063355 | orchestrator |
2026-03-29 01:34:57.063438 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] ***********
2026-03-29 01:34:57.063446 | orchestrator | Sunday 29 March 2026 01:34:45 +0000 (0:00:00.181) 0:01:00.851 **********
2026-03-29 01:34:57.063452 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:34:57.063459 | orchestrator |
2026-03-29 01:34:57.063465 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] *******************
2026-03-29 01:34:57.063471 | orchestrator | Sunday 29 March 2026 01:34:45 +0000 (0:00:00.518) 0:01:01.033 **********
2026-03-29 01:34:57.063477 | orchestrator | ok: [testbed-manager]
2026-03-29 01:34:57.063483 | orchestrator |
2026-03-29 01:34:57.063489 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] ***************************
2026-03-29 01:34:57.063496 | orchestrator | Sunday 29 March 2026 01:34:46 +0000 (0:00:00.893) 0:01:01.552 **********
2026-03-29 01:34:57.063502 | orchestrator | changed: [testbed-manager]
2026-03-29 01:34:57.063508 | orchestrator |
2026-03-29 01:34:57.063514 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] *******************
2026-03-29 01:34:57.063520 | orchestrator | Sunday 29 March 2026 01:34:47 +0000 (0:00:00.516) 0:01:02.445 **********
2026-03-29 01:34:57.063527 | orchestrator | ok: [testbed-manager]
2026-03-29 01:34:57.063533 | orchestrator |
2026-03-29 01:34:57.063539 | orchestrator | TASK [osism.validations.tempest : Copy include list] ***************************
2026-03-29 01:34:57.063545 | orchestrator | Sunday 29 March 2026 01:34:47 +0000 (0:00:00.255) 0:01:02.961 **********
2026-03-29 01:34:57.063552 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:34:57.063558 | orchestrator |
2026-03-29 01:34:57.063564 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] **********************
2026-03-29 01:34:57.063571 | orchestrator | Sunday 29 March 2026 01:34:47 +0000 (0:00:00.255) 0:01:03.217 **********
2026-03-29 01:34:57.063576 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-03-29 01:34:57.063583 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-03-29 01:34:57.063589 | orchestrator |
2026-03-29 01:34:57.063595 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] **********************
2026-03-29 01:34:57.063602 | orchestrator | Sunday 29 March 2026 01:34:56 +0000 (0:00:08.070) 0:01:11.288 **********
2026-03-29 01:34:57.063608 | orchestrator | changed: [testbed-manager]
2026-03-29 01:34:57.063614 | orchestrator |
2026-03-29 01:34:57.063625 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 01:34:57.063632 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-29 01:34:57.063639 | orchestrator |
2026-03-29 01:34:57.063645 | orchestrator |
2026-03-29 01:34:57.063652 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 01:34:57.063658 | orchestrator | Sunday 29 March 2026 01:34:57 +0000 (0:00:01.011) 0:01:12.299 **********
2026-03-29 01:34:57.063664 | orchestrator | ===============================================================================
2026-03-29 01:34:57.063670 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 18.72s
2026-03-29 01:34:57.063676 | orchestrator | osism.validations.tempest : Install qemu-utils package ----------------- 10.72s
2026-03-29 01:34:57.063682 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 8.07s
2026-03-29 01:34:57.063688 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 7.66s
2026-03-29 01:34:57.063700 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 4.46s
2026-03-29 01:34:57.063706 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.54s
2026-03-29 01:34:57.063712 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.33s
2026-03-29 01:34:57.063718 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.49s
2026-03-29 01:34:57.063724 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.78s
2026-03-29 01:34:57.063730 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.74s
2026-03-29 01:34:57.063736 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.66s
2026-03-29 01:34:57.063742 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.08s
2026-03-29 01:34:57.063748 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.01s
2026-03-29 01:34:57.063754 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 0.93s
2026-03-29 01:34:57.063759 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.89s
2026-03-29 01:34:57.063766 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.79s
2026-03-29 01:34:57.063772 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.52s
2026-03-29 01:34:57.063785 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.52s
2026-03-29 01:34:57.444666 | orchestrator | osism.validations.tempest : Set fact for config option img_file --------- 0.41s
2026-03-29 01:34:57.444766 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.38s
2026-03-29 01:34:57.693300 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-03-29 01:34:57.697171 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-03-29 01:34:57.700628 | orchestrator |
2026-03-29 01:34:57.700725 | orchestrator | ## IDENTITY (API)
2026-03-29 01:34:57.700736 | orchestrator |
2026-03-29 01:34:57.700743 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-29 01:34:57.700751 | orchestrator | + echo
2026-03-29 01:34:57.700758 | orchestrator | + echo '## IDENTITY (API)'
2026-03-29 01:34:57.700765 | orchestrator | + echo
2026-03-29 01:34:57.700772 | orchestrator | + _tempest tempest.api.identity.v3
2026-03-29 01:34:57.700780 | orchestrator | + local regex=tempest.api.identity.v3
2026-03-29 01:34:57.701613 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16
2026-03-29 01:34:57.702397 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-29 01:34:57.704447 | orchestrator | + tee -a /opt/tempest/20260329-0134.log
2026-03-29 01:35:01.776289 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-29 01:35:01.776440 | orchestrator | Did you mean one of these?
2026-03-29 01:35:01.776452 | orchestrator | help
2026-03-29 01:35:01.776459 | orchestrator | init
2026-03-29 01:35:02.198171 | orchestrator |
2026-03-29 01:35:02.198224 | orchestrator | ## IMAGE (API)
2026-03-29 01:35:02.198230 | orchestrator |
2026-03-29 01:35:02.198235 | orchestrator | + echo
2026-03-29 01:35:02.198239 | orchestrator | + echo '## IMAGE (API)'
2026-03-29 01:35:02.198244 | orchestrator | + echo
2026-03-29 01:35:02.198247 | orchestrator | + _tempest tempest.api.image.v2
2026-03-29 01:35:02.198252 | orchestrator | + local regex=tempest.api.image.v2
2026-03-29 01:35:02.199521 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-03-29 01:35:02.200782 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-29 01:35:02.203624 | orchestrator | + tee -a /opt/tempest/20260329-0135.log
2026-03-29 01:35:06.213712 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-29 01:35:06.213870 | orchestrator | Did you mean one of these?
2026-03-29 01:35:06.213895 | orchestrator | help
2026-03-29 01:35:06.213903 | orchestrator | init
2026-03-29 01:35:06.769005 | orchestrator |
2026-03-29 01:35:06.769128 | orchestrator | ## NETWORK (API)
2026-03-29 01:35:06.769135 | orchestrator |
2026-03-29 01:35:06.769140 | orchestrator | + echo
2026-03-29 01:35:06.769144 | orchestrator | + echo '## NETWORK (API)'
2026-03-29 01:35:06.769150 | orchestrator | + echo
2026-03-29 01:35:06.769154 | orchestrator | + _tempest tempest.api.network
2026-03-29 01:35:06.769158 | orchestrator | + local regex=tempest.api.network
2026-03-29 01:35:06.769923 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-03-29 01:35:06.770350 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-29 01:35:06.772676 | orchestrator | + tee -a /opt/tempest/20260329-0135.log
2026-03-29 01:35:10.740554 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-29 01:35:10.740662 | orchestrator | Did you mean one of these?
2026-03-29 01:35:10.740670 | orchestrator | help
2026-03-29 01:35:10.740675 | orchestrator | init
2026-03-29 01:35:11.246514 | orchestrator |
2026-03-29 01:35:11.246606 | orchestrator | ## VOLUME (API)
2026-03-29 01:35:11.246617 | orchestrator |
2026-03-29 01:35:11.246623 | orchestrator | + echo
2026-03-29 01:35:11.246630 | orchestrator | + echo '## VOLUME (API)'
2026-03-29 01:35:11.246637 | orchestrator | + echo
2026-03-29 01:35:11.246644 | orchestrator | + _tempest tempest.api.volume
2026-03-29 01:35:11.246651 | orchestrator | + local regex=tempest.api.volume
2026-03-29 01:35:11.247647 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16
2026-03-29 01:35:11.248246 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-29 01:35:11.251820 | orchestrator | + tee -a /opt/tempest/20260329-0135.log
2026-03-29 01:35:15.091930 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-29 01:35:15.092029 | orchestrator | Did you mean one of these?
2026-03-29 01:35:15.092042 | orchestrator | help 2026-03-29 01:35:15.092049 | orchestrator | init 2026-03-29 01:35:15.770948 | orchestrator | 2026-03-29 01:35:15.771032 | orchestrator | ## COMPUTE (API) 2026-03-29 01:35:15.771047 | orchestrator | 2026-03-29 01:35:15.771093 | orchestrator | + echo 2026-03-29 01:35:15.771101 | orchestrator | + echo '## COMPUTE (API)' 2026-03-29 01:35:15.771110 | orchestrator | + echo 2026-03-29 01:35:15.771119 | orchestrator | + _tempest tempest.api.compute 2026-03-29 01:35:15.771150 | orchestrator | + local regex=tempest.api.compute 2026-03-29 01:35:15.772170 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16 2026-03-29 01:35:15.772421 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-29 01:35:15.775528 | orchestrator | + tee -a /opt/tempest/20260329-0135.log 2026-03-29 01:35:19.410548 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-29 01:35:19.410657 | orchestrator | Did you mean one of these? 
2026-03-29 01:35:19.410681 | orchestrator | help 2026-03-29 01:35:19.410696 | orchestrator | init 2026-03-29 01:35:19.833158 | orchestrator | 2026-03-29 01:35:19.833237 | orchestrator | ## DNS (API) 2026-03-29 01:35:19.833247 | orchestrator | 2026-03-29 01:35:19.833255 | orchestrator | + echo 2026-03-29 01:35:19.833262 | orchestrator | + echo '## DNS (API)' 2026-03-29 01:35:19.833270 | orchestrator | + echo 2026-03-29 01:35:19.833278 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2 2026-03-29 01:35:19.833287 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2 2026-03-29 01:35:19.834108 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16 2026-03-29 01:35:19.834587 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-29 01:35:19.837937 | orchestrator | + tee -a /opt/tempest/20260329-0135.log 2026-03-29 01:35:23.695724 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-29 01:35:23.695834 | orchestrator | Did you mean one of these? 
2026-03-29 01:35:23.695848 | orchestrator | help 2026-03-29 01:35:23.695857 | orchestrator | init 2026-03-29 01:35:24.142139 | orchestrator | 2026-03-29 01:35:24.142216 | orchestrator | ## OBJECT-STORE (API) 2026-03-29 01:35:24.142224 | orchestrator | 2026-03-29 01:35:24.142229 | orchestrator | + echo 2026-03-29 01:35:24.142233 | orchestrator | + echo '## OBJECT-STORE (API)' 2026-03-29 01:35:24.142237 | orchestrator | + echo 2026-03-29 01:35:24.142242 | orchestrator | + _tempest tempest.api.object_storage 2026-03-29 01:35:24.142247 | orchestrator | + local regex=tempest.api.object_storage 2026-03-29 01:35:24.143145 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16 2026-03-29 01:35:24.143954 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-29 01:35:24.147963 | orchestrator | + tee -a /opt/tempest/20260329-0135.log 2026-03-29 01:35:27.878482 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-29 01:35:27.878622 | orchestrator | Did you mean one of these? 
2026-03-29 01:35:27.878650 | orchestrator | help
2026-03-29 01:35:27.878669 | orchestrator | init
2026-03-29 01:35:28.509970 | orchestrator | ok: Runtime: 0:01:58.210192
2026-03-29 01:35:28.533837 |
2026-03-29 01:35:28.534106 | TASK [Check prometheus alert status]
2026-03-29 01:35:29.077969 | orchestrator | skipping: Conditional result was False
2026-03-29 01:35:29.082199 |
2026-03-29 01:35:29.082409 | PLAY RECAP
2026-03-29 01:35:29.082566 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0
2026-03-29 01:35:29.082640 |
2026-03-29 01:35:29.314670 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-03-29 01:35:29.318210 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-29 01:35:30.174177 |
2026-03-29 01:35:30.174378 | PLAY [Post output play]
2026-03-29 01:35:30.191923 |
2026-03-29 01:35:30.192090 | LOOP [stage-output : Register sources]
2026-03-29 01:35:30.263009 |
2026-03-29 01:35:30.263345 | TASK [stage-output : Check sudo]
2026-03-29 01:35:31.174721 | orchestrator | sudo: a password is required
2026-03-29 01:35:31.304005 | orchestrator | ok: Runtime: 0:00:00.013105
2026-03-29 01:35:31.318346 |
2026-03-29 01:35:31.318506 | LOOP [stage-output : Set source and destination for files and folders]
2026-03-29 01:35:31.360330 |
2026-03-29 01:35:31.360718 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-03-29 01:35:31.441655 | orchestrator | ok
2026-03-29 01:35:31.450814 |
2026-03-29 01:35:31.450981 | LOOP [stage-output : Ensure target folders exist]
2026-03-29 01:35:31.973836 | orchestrator | ok: "docs"
2026-03-29 01:35:31.974190 |
2026-03-29 01:35:32.216520 | orchestrator | ok: "artifacts"
2026-03-29 01:35:32.455419 | orchestrator | ok: "logs"
2026-03-29 01:35:32.473889 |
2026-03-29 01:35:32.474058 | LOOP [stage-output : Copy files and folders to staging folder]
2026-03-29 01:35:32.508809 |
2026-03-29 01:35:32.509054 | TASK [stage-output : Make all log files readable]
2026-03-29 01:35:32.781416 | orchestrator | ok
2026-03-29 01:35:32.791166 |
2026-03-29 01:35:32.791374 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-03-29 01:35:32.827610 | orchestrator | skipping: Conditional result was False
2026-03-29 01:35:32.844691 |
2026-03-29 01:35:32.844869 | TASK [stage-output : Discover log files for compression]
2026-03-29 01:35:32.870390 | orchestrator | skipping: Conditional result was False
2026-03-29 01:35:32.884382 |
2026-03-29 01:35:32.884553 | LOOP [stage-output : Archive everything from logs]
2026-03-29 01:35:32.932105 |
2026-03-29 01:35:32.932368 | PLAY [Post cleanup play]
2026-03-29 01:35:32.941435 |
2026-03-29 01:35:32.941540 | TASK [Set cloud fact (Zuul deployment)]
2026-03-29 01:35:33.009708 | orchestrator | ok
2026-03-29 01:35:33.021546 |
2026-03-29 01:35:33.021675 | TASK [Set cloud fact (local deployment)]
2026-03-29 01:35:33.056410 | orchestrator | skipping: Conditional result was False
2026-03-29 01:35:33.072833 |
2026-03-29 01:35:33.072978 | TASK [Clean the cloud environment]
2026-03-29 01:35:34.678528 | orchestrator | 2026-03-29 01:35:34 - clean up servers
2026-03-29 01:35:35.460664 | orchestrator | 2026-03-29 01:35:35 - testbed-manager
2026-03-29 01:35:35.547252 | orchestrator | 2026-03-29 01:35:35 - testbed-node-1
2026-03-29 01:35:35.633126 | orchestrator | 2026-03-29 01:35:35 - testbed-node-2
2026-03-29 01:35:35.716503 | orchestrator | 2026-03-29 01:35:35 - testbed-node-4
2026-03-29 01:35:36.065640 | orchestrator | 2026-03-29 01:35:36 - testbed-node-3
2026-03-29 01:35:36.155014 | orchestrator | 2026-03-29 01:35:36 - testbed-node-5
2026-03-29 01:35:36.256975 | orchestrator | 2026-03-29 01:35:36 - testbed-node-0
2026-03-29 01:35:36.343993 | orchestrator | 2026-03-29 01:35:36 - clean up keypairs
2026-03-29 01:35:36.359821 | orchestrator | 2026-03-29 01:35:36 - testbed
2026-03-29 01:35:36.385353 | orchestrator | 2026-03-29 01:35:36 - wait for servers to be gone
2026-03-29 01:35:47.199195 | orchestrator | 2026-03-29 01:35:47 - clean up ports
2026-03-29 01:35:47.369394 | orchestrator | 2026-03-29 01:35:47 - 00fe3420-6557-41ef-8c79-be5f90caafc7
2026-03-29 01:35:47.639583 | orchestrator | 2026-03-29 01:35:47 - 4122e688-2740-4d30-afc8-305412faa17d
2026-03-29 01:35:48.108833 | orchestrator | 2026-03-29 01:35:48 - 420d3772-c5ed-41b2-adb7-1e0f00aa9366
2026-03-29 01:35:48.324597 | orchestrator | 2026-03-29 01:35:48 - 9a1205b8-a615-4845-8813-3478f6c94635
2026-03-29 01:35:48.534851 | orchestrator | 2026-03-29 01:35:48 - ab39241d-199d-4402-8df3-80486326645e
2026-03-29 01:35:49.190253 | orchestrator | 2026-03-29 01:35:49 - ac18c357-4660-4cc6-b6e5-0328da0872eb
2026-03-29 01:35:49.451171 | orchestrator | 2026-03-29 01:35:49 - e181d99b-4545-4cdf-a039-63cee6db67f8
2026-03-29 01:35:49.773559 | orchestrator | 2026-03-29 01:35:49 - clean up volumes
2026-03-29 01:35:49.901156 | orchestrator | 2026-03-29 01:35:49 - testbed-volume-0-node-base
2026-03-29 01:35:49.940569 | orchestrator | 2026-03-29 01:35:49 - testbed-volume-2-node-base
2026-03-29 01:35:49.986144 | orchestrator | 2026-03-29 01:35:49 - testbed-volume-5-node-base
2026-03-29 01:35:50.023310 | orchestrator | 2026-03-29 01:35:50 - testbed-volume-3-node-base
2026-03-29 01:35:50.063079 | orchestrator | 2026-03-29 01:35:50 - testbed-volume-1-node-base
2026-03-29 01:35:50.107917 | orchestrator | 2026-03-29 01:35:50 - testbed-volume-4-node-base
2026-03-29 01:35:50.151776 | orchestrator | 2026-03-29 01:35:50 - testbed-volume-manager-base
2026-03-29 01:35:50.194688 | orchestrator | 2026-03-29 01:35:50 - testbed-volume-4-node-4
2026-03-29 01:35:50.242993 | orchestrator | 2026-03-29 01:35:50 - testbed-volume-0-node-3
2026-03-29 01:35:50.289218 | orchestrator | 2026-03-29 01:35:50 - testbed-volume-6-node-3
2026-03-29 01:35:50.337385 | orchestrator | 2026-03-29 01:35:50 - testbed-volume-7-node-4
2026-03-29 01:35:50.385523 | orchestrator | 2026-03-29 01:35:50 - testbed-volume-8-node-5
2026-03-29 01:35:50.427245 | orchestrator | 2026-03-29 01:35:50 - testbed-volume-5-node-5
2026-03-29 01:35:50.471003 | orchestrator | 2026-03-29 01:35:50 - testbed-volume-2-node-5
2026-03-29 01:35:50.512337 | orchestrator | 2026-03-29 01:35:50 - testbed-volume-1-node-4
2026-03-29 01:35:50.551663 | orchestrator | 2026-03-29 01:35:50 - testbed-volume-3-node-3
2026-03-29 01:35:50.596763 | orchestrator | 2026-03-29 01:35:50 - disconnect routers
2026-03-29 01:35:50.714989 | orchestrator | 2026-03-29 01:35:50 - testbed
2026-03-29 01:35:51.642164 | orchestrator | 2026-03-29 01:35:51 - clean up subnets
2026-03-29 01:35:51.693666 | orchestrator | 2026-03-29 01:35:51 - subnet-testbed-management
2026-03-29 01:35:51.893602 | orchestrator | 2026-03-29 01:35:51 - clean up networks
2026-03-29 01:35:52.067638 | orchestrator | 2026-03-29 01:35:52 - net-testbed-management
2026-03-29 01:35:52.342600 | orchestrator | 2026-03-29 01:35:52 - clean up security groups
2026-03-29 01:35:52.394366 | orchestrator | 2026-03-29 01:35:52 - testbed-node
2026-03-29 01:35:52.512409 | orchestrator | 2026-03-29 01:35:52 - testbed-management
2026-03-29 01:35:52.630863 | orchestrator | 2026-03-29 01:35:52 - clean up floating ips
2026-03-29 01:35:52.673559 | orchestrator | 2026-03-29 01:35:52 - 81.163.193.35
2026-03-29 01:35:53.009329 | orchestrator | 2026-03-29 01:35:53 - clean up routers
2026-03-29 01:35:53.111178 | orchestrator | 2026-03-29 01:35:53 - testbed
2026-03-29 01:35:54.625612 | orchestrator | ok: Runtime: 0:00:21.142769
2026-03-29 01:35:54.630128 |
2026-03-29 01:35:54.630318 | PLAY RECAP
2026-03-29 01:35:54.630450 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-03-29 01:35:54.630510 |
2026-03-29 01:35:54.772023 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-29 01:35:54.774616 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
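A note on the five identical tempest failures above: in each case the container's entry point rejects the entire option string (`'run --workspace-path ... --concurrency 16' is not a tempest command`) as one unknown command name. That is the classic symptom of a wrapper expanding its arguments as a single quoted word instead of separate words. A minimal sketch of the failure mode and the array-based fix; the `fake_cli` helper below is hypothetical and only imitates tempest's command dispatch, it is not part of tempest or the testbed scripts:

```shell
#!/usr/bin/env bash
# Stand-in for a subcommand-style CLI such as tempest (assumption, for
# illustration only): it dispatches on $1 and rejects unknown commands.
fake_cli() {
  case "$1" in
    run) echo "dispatched: run with $(($# - 1)) option words" ;;
    *)   echo "error: '$1' is not a command" ;;
  esac
}

# Broken pattern: the whole command line is stored in one string and then
# expanded inside quotes, so it arrives as a single argv entry.
args="run --regex tempest.api.network --concurrency 16"
fake_cli "$args"        # error branch: the entire string is $1

# Working pattern: keep the words in an array so each one stays a
# separate argument after expansion.
argv=(run --regex tempest.api.network --concurrency 16)
fake_cli "${argv[@]}"   # run branch: "run" is $1, options follow separately
```

The same reasoning applies to the traced `docker run ... tempest:latest run --workspace-path ...` invocations: if the image's entry point receives those trailing words as one token, it fails exactly as logged.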
2026-03-29 01:35:55.510513 |
2026-03-29 01:35:55.510677 | PLAY [Cleanup play]
2026-03-29 01:35:55.526788 |
2026-03-29 01:35:55.526951 | TASK [Set cloud fact (Zuul deployment)]
2026-03-29 01:35:55.606462 | orchestrator | ok
2026-03-29 01:35:55.616011 |
2026-03-29 01:35:55.616155 | TASK [Set cloud fact (local deployment)]
2026-03-29 01:35:55.650739 | orchestrator | skipping: Conditional result was False
2026-03-29 01:35:55.664269 |
2026-03-29 01:35:55.664397 | TASK [Clean the cloud environment]
2026-03-29 01:35:56.842756 | orchestrator | 2026-03-29 01:35:56 - clean up servers
2026-03-29 01:35:57.313784 | orchestrator | 2026-03-29 01:35:57 - clean up keypairs
2026-03-29 01:35:57.329799 | orchestrator | 2026-03-29 01:35:57 - wait for servers to be gone
2026-03-29 01:35:57.367660 | orchestrator | 2026-03-29 01:35:57 - clean up ports
2026-03-29 01:35:57.449085 | orchestrator | 2026-03-29 01:35:57 - clean up volumes
2026-03-29 01:35:57.516509 | orchestrator | 2026-03-29 01:35:57 - disconnect routers
2026-03-29 01:35:57.540616 | orchestrator | 2026-03-29 01:35:57 - clean up subnets
2026-03-29 01:35:57.560619 | orchestrator | 2026-03-29 01:35:57 - clean up networks
2026-03-29 01:35:57.716250 | orchestrator | 2026-03-29 01:35:57 - clean up security groups
2026-03-29 01:35:57.748983 | orchestrator | 2026-03-29 01:35:57 - clean up floating ips
2026-03-29 01:35:57.772816 | orchestrator | 2026-03-29 01:35:57 - clean up routers
2026-03-29 01:35:58.199869 | orchestrator | ok: Runtime: 0:00:01.372028
2026-03-29 01:35:58.203835 |
2026-03-29 01:35:58.204039 | PLAY RECAP
2026-03-29 01:35:58.204187 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-03-29 01:35:58.204508 |
2026-03-29 01:35:58.373606 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-29 01:35:58.376250 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-29 01:35:59.135539 |
2026-03-29 01:35:59.135708 | PLAY [Base post-fetch]
2026-03-29 01:35:59.151222 |
2026-03-29 01:35:59.151351 | TASK [fetch-output : Set log path for multiple nodes]
2026-03-29 01:35:59.206465 | orchestrator | skipping: Conditional result was False
2026-03-29 01:35:59.218863 |
2026-03-29 01:35:59.219052 | TASK [fetch-output : Set log path for single node]
2026-03-29 01:35:59.270250 | orchestrator | ok
2026-03-29 01:35:59.280363 |
2026-03-29 01:35:59.280507 | LOOP [fetch-output : Ensure local output dirs]
2026-03-29 01:35:59.764256 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/0c9b8dad94e24d61892e6bb3a93b466e/work/logs"
2026-03-29 01:36:00.044732 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0c9b8dad94e24d61892e6bb3a93b466e/work/artifacts"
2026-03-29 01:36:00.350223 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0c9b8dad94e24d61892e6bb3a93b466e/work/docs"
2026-03-29 01:36:00.375535 |
2026-03-29 01:36:00.375712 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-29 01:36:01.349276 | orchestrator | changed: .d..t...... ./
2026-03-29 01:36:01.349570 | orchestrator | changed: All items complete
2026-03-29 01:36:01.349617 |
2026-03-29 01:36:02.062235 | orchestrator | changed: .d..t...... ./
2026-03-29 01:36:02.824428 | orchestrator | changed: .d..t...... ./
2026-03-29 01:36:02.856180 |
2026-03-29 01:36:02.856389 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-29 01:36:02.891699 | orchestrator | skipping: Conditional result was False
2026-03-29 01:36:02.895847 | orchestrator | skipping: Conditional result was False
2026-03-29 01:36:02.917997 |
2026-03-29 01:36:02.918112 | PLAY RECAP
2026-03-29 01:36:02.918217 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-29 01:36:02.918264 |
2026-03-29 01:36:03.044061 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-29 01:36:03.046680 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-29 01:36:03.809639 |
2026-03-29 01:36:03.809807 | PLAY [Base post]
2026-03-29 01:36:03.824989 |
2026-03-29 01:36:03.825135 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-29 01:36:05.000578 | orchestrator | changed
2026-03-29 01:36:05.012091 |
2026-03-29 01:36:05.012257 | PLAY RECAP
2026-03-29 01:36:05.012338 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-29 01:36:05.012413 |
2026-03-29 01:36:05.163065 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-29 01:36:05.165957 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-29 01:36:05.964003 |
2026-03-29 01:36:05.964174 | PLAY [Base post-logs]
2026-03-29 01:36:05.975150 |
2026-03-29 01:36:05.975314 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-29 01:36:06.463945 | localhost | changed
2026-03-29 01:36:06.480252 |
2026-03-29 01:36:06.480500 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-29 01:36:06.528144 | localhost | ok
2026-03-29 01:36:06.532580 |
2026-03-29 01:36:06.532731 | TASK [Set zuul-log-path fact]
2026-03-29 01:36:06.559011 | localhost | ok
2026-03-29 01:36:06.567969 |
2026-03-29 01:36:06.568144 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-29 01:36:06.604252 | localhost | ok
2026-03-29 01:36:06.608067 |
2026-03-29 01:36:06.608230 | TASK [upload-logs : Create log directories]
2026-03-29 01:36:07.158390 | localhost | changed
2026-03-29 01:36:07.163257 |
2026-03-29 01:36:07.163437 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-29 01:36:07.694252 | localhost -> localhost | ok: Runtime: 0:00:00.007097
2026-03-29 01:36:07.698432 |
2026-03-29 01:36:07.698548 | TASK [upload-logs : Upload logs to log server]
2026-03-29 01:36:08.251047 | localhost | Output suppressed because no_log was given
2026-03-29 01:36:08.253071 |
2026-03-29 01:36:08.253228 | LOOP [upload-logs : Compress console log and json output]
2026-03-29 01:36:08.317849 | localhost | skipping: Conditional result was False
2026-03-29 01:36:08.326297 | localhost | skipping: Conditional result was False
2026-03-29 01:36:08.330884 |
2026-03-29 01:36:08.331001 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-29 01:36:08.377040 | localhost | skipping: Conditional result was False
2026-03-29 01:36:08.377333 |
2026-03-29 01:36:08.382232 | localhost | skipping: Conditional result was False
2026-03-29 01:36:08.387359 |
2026-03-29 01:36:08.387480 | LOOP [upload-logs : Upload console log and json output]