2026-01-10 13:43:17.666677 | Job console starting
2026-01-10 13:43:17.730015 | Updating git repos
2026-01-10 13:43:17.811759 | Cloning repos into workspace
2026-01-10 13:43:18.194300 | Restoring repo states
2026-01-10 13:43:18.247480 | Merging changes
2026-01-10 13:43:18.989994 | Checking out repos
2026-01-10 13:43:19.312550 | Preparing playbooks
2026-01-10 13:43:20.172597 | Running Ansible setup
2026-01-10 13:43:26.573210 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-10 13:43:27.923353 |
2026-01-10 13:43:27.923608 | PLAY [Base pre]
2026-01-10 13:43:27.991582 |
2026-01-10 13:43:27.991770 | TASK [Setup log path fact]
2026-01-10 13:43:28.029871 | orchestrator | ok
2026-01-10 13:43:28.082162 |
2026-01-10 13:43:28.082348 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-10 13:43:28.160563 | orchestrator | ok
2026-01-10 13:43:28.180700 |
2026-01-10 13:43:28.180850 | TASK [emit-job-header : Print job information]
2026-01-10 13:43:28.272017 | # Job Information
2026-01-10 13:43:28.272219 | Ansible Version: 2.16.14
2026-01-10 13:43:28.272254 | Job: testbed-deploy-next-in-a-nutshell-ubuntu-24.04
2026-01-10 13:43:28.272287 | Pipeline: label
2026-01-10 13:43:28.272310 | Executor: 521e9411259a
2026-01-10 13:43:28.272331 | Triggered by: https://github.com/osism/testbed/pull/2818
2026-01-10 13:43:28.272353 | Event ID: 4a28f280-ee2a-11f0-88a7-9820c87091e7
2026-01-10 13:43:28.279871 |
2026-01-10 13:43:28.288808 | LOOP [emit-job-header : Print node information]
2026-01-10 13:43:28.503886 | orchestrator | ok:
2026-01-10 13:43:28.504114 | orchestrator | # Node Information
2026-01-10 13:43:28.504152 | orchestrator | Inventory Hostname: orchestrator
2026-01-10 13:43:28.504178 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-10 13:43:28.504200 | orchestrator | Username: zuul-testbed01
2026-01-10 13:43:28.504221 | orchestrator | Distro: Debian 12.12
2026-01-10 13:43:28.504245 | orchestrator | Provider: static-testbed
2026-01-10 13:43:28.504267 | orchestrator | Region:
2026-01-10 13:43:28.504289 | orchestrator | Label: testbed-orchestrator
2026-01-10 13:43:28.504309 | orchestrator | Product Name: OpenStack Nova
2026-01-10 13:43:28.504328 | orchestrator | Interface IP: 81.163.193.140
2026-01-10 13:43:28.528864 |
2026-01-10 13:43:28.529046 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-10 13:43:29.522810 | orchestrator -> localhost | changed
2026-01-10 13:43:29.533179 |
2026-01-10 13:43:29.533343 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-10 13:43:31.311682 | orchestrator -> localhost | changed
2026-01-10 13:43:31.331002 |
2026-01-10 13:43:31.331189 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-10 13:43:31.744625 | orchestrator -> localhost | ok
2026-01-10 13:43:31.753032 |
2026-01-10 13:43:31.753180 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-10 13:43:31.783586 | orchestrator | ok
2026-01-10 13:43:31.823360 | orchestrator | included: /var/lib/zuul/builds/aed8ea0702db46caaf17932aabaccc56/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-10 13:43:31.832381 |
2026-01-10 13:43:31.832528 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-10 13:43:33.431221 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-10 13:43:33.431451 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/aed8ea0702db46caaf17932aabaccc56/work/aed8ea0702db46caaf17932aabaccc56_id_rsa
2026-01-10 13:43:33.431492 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/aed8ea0702db46caaf17932aabaccc56/work/aed8ea0702db46caaf17932aabaccc56_id_rsa.pub
2026-01-10 13:43:33.431518 | orchestrator -> localhost | The key fingerprint is:
2026-01-10 13:43:33.431546 | orchestrator -> localhost | SHA256:9btb3Wgq+kbWMnHhXKCnozv7/KlggUamdDK+SoQbMVY zuul-build-sshkey
2026-01-10 13:43:33.431569 | orchestrator -> localhost | The key's randomart image is:
2026-01-10 13:43:33.431606 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-10 13:43:33.431628 | orchestrator -> localhost | | E .. |
2026-01-10 13:43:33.431650 | orchestrator -> localhost | | . .. . |
2026-01-10 13:43:33.431670 | orchestrator -> localhost | |o. + + oo.o |
2026-01-10 13:43:33.431689 | orchestrator -> localhost | |.+ o B . ..++ |
2026-01-10 13:43:33.431709 | orchestrator -> localhost | |o . o o S o+. |
2026-01-10 13:43:33.431735 | orchestrator -> localhost | | + o o=... o.|
2026-01-10 13:43:33.431756 | orchestrator -> localhost | |. . . +o o. + o|
2026-01-10 13:43:33.431777 | orchestrator -> localhost | | . . ..+o * |
2026-01-10 13:43:33.431798 | orchestrator -> localhost | | . +B=+*. |
2026-01-10 13:43:33.431818 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-10 13:43:33.431874 | orchestrator -> localhost | ok: Runtime: 0:00:00.779898
2026-01-10 13:43:33.444380 |
2026-01-10 13:43:33.444544 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-10 13:43:33.486874 | orchestrator | ok
2026-01-10 13:43:33.515151 | orchestrator | included: /var/lib/zuul/builds/aed8ea0702db46caaf17932aabaccc56/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-10 13:43:33.531563 |
2026-01-10 13:43:33.531710 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-10 13:43:33.587429 | orchestrator | skipping: Conditional result was False
2026-01-10 13:43:33.595067 |
2026-01-10 13:43:33.595203 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-10 13:43:34.370388 | orchestrator | changed
2026-01-10 13:43:34.387580 |
2026-01-10 13:43:34.387730 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-10 13:43:34.742405 | orchestrator | ok
2026-01-10 13:43:34.750563 |
2026-01-10 13:43:34.750704 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-10 13:43:35.226939 | orchestrator | ok
2026-01-10 13:43:35.233447 |
2026-01-10 13:43:35.233572 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-10 13:43:35.692430 | orchestrator | ok
2026-01-10 13:43:35.699295 |
2026-01-10 13:43:35.699430 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-10 13:43:35.724282 | orchestrator | skipping: Conditional result was False
2026-01-10 13:43:35.731801 |
2026-01-10 13:43:35.731924 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-10 13:43:36.421527 | orchestrator -> localhost | changed
2026-01-10 13:43:36.444333 |
2026-01-10 13:43:36.444553 | TASK [add-build-sshkey : Add back temp key]
2026-01-10 13:43:37.136646 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/aed8ea0702db46caaf17932aabaccc56/work/aed8ea0702db46caaf17932aabaccc56_id_rsa (zuul-build-sshkey)
2026-01-10 13:43:37.136928 | orchestrator -> localhost | ok: Runtime: 0:00:00.036217
2026-01-10 13:43:37.146653 |
2026-01-10 13:43:37.146799 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-10 13:43:37.764188 | orchestrator | ok
2026-01-10 13:43:37.770424 |
2026-01-10 13:43:37.770561 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-10 13:43:37.818604 | orchestrator | skipping: Conditional result was False
2026-01-10 13:43:37.893587 |
2026-01-10 13:43:37.893726 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-10 13:43:38.335412 | orchestrator | ok
2026-01-10 13:43:38.348641 |
2026-01-10 13:43:38.348842 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-10 13:43:38.394066 | orchestrator | ok
2026-01-10 13:43:38.404054 |
2026-01-10 13:43:38.404221 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-10 13:43:38.872092 | orchestrator -> localhost | ok
2026-01-10 13:43:38.885762 |
2026-01-10 13:43:38.885949 | TASK [validate-host : Collect information about the host]
2026-01-10 13:43:40.433909 | orchestrator | ok
2026-01-10 13:43:40.449553 |
2026-01-10 13:43:40.449709 | TASK [validate-host : Sanitize hostname]
2026-01-10 13:43:40.511852 | orchestrator | ok
2026-01-10 13:43:40.517728 |
2026-01-10 13:43:40.517858 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-10 13:43:41.545940 | orchestrator -> localhost | changed
2026-01-10 13:43:41.553760 |
2026-01-10 13:43:41.553894 | TASK [validate-host : Collect information about zuul worker]
2026-01-10 13:43:42.203673 | orchestrator | ok
2026-01-10 13:43:42.211373 |
2026-01-10 13:43:42.211519 | TASK [validate-host : Write out all zuul information for each host]
2026-01-10 13:43:43.024580 | orchestrator -> localhost | changed
2026-01-10 13:43:43.035935 |
2026-01-10 13:43:43.036116 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-10 13:43:43.364148 | orchestrator | ok
2026-01-10 13:43:43.370583 |
2026-01-10 13:43:43.370710 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-10 13:44:40.172163 | orchestrator | changed:
2026-01-10 13:44:40.172536 | orchestrator | .d..t...... src/
2026-01-10 13:44:40.172599 | orchestrator | .d..t...... src/github.com/
2026-01-10 13:44:40.172644 | orchestrator | .d..t...... src/github.com/osism/
2026-01-10 13:44:40.172683 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-10 13:44:40.172721 | orchestrator | RedHat.yml
2026-01-10 13:44:40.191526 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-10 13:44:40.191544 | orchestrator | RedHat.yml
2026-01-10 13:44:40.191597 | orchestrator | = 1.53.0"...
2026-01-10 13:44:50.636375 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-01-10 13:44:50.654305 | orchestrator | - Finding latest version of hashicorp/null...
2026-01-10 13:44:51.695822 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-10 13:44:52.507880 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-10 13:44:52.764208 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-10 13:44:53.250319 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-10 13:44:53.518961 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-10 13:44:53.995625 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-10 13:44:53.995707 | orchestrator |
2026-01-10 13:44:53.995714 | orchestrator | Providers are signed by their developers.
2026-01-10 13:44:53.995721 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-10 13:44:53.995727 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-10 13:44:53.995736 | orchestrator |
2026-01-10 13:44:53.995741 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-10 13:44:53.995746 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-10 13:44:53.995765 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-10 13:44:53.995769 | orchestrator | you run "tofu init" in the future.
2026-01-10 13:44:53.995785 | orchestrator |
2026-01-10 13:44:53.995789 | orchestrator | OpenTofu has been successfully initialized!
2026-01-10 13:44:53.995793 | orchestrator |
2026-01-10 13:44:53.995802 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-10 13:44:53.995806 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-10 13:44:53.995810 | orchestrator | should now work.
2026-01-10 13:44:53.995814 | orchestrator |
2026-01-10 13:44:53.995838 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-10 13:44:53.995842 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-10 13:44:53.995847 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-10 13:44:54.194220 | orchestrator | Created and switched to workspace "ci"!
2026-01-10 13:44:54.194390 | orchestrator |
2026-01-10 13:44:54.194401 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-10 13:44:54.194408 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-10 13:44:54.194412 | orchestrator | for this configuration.
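The `tofu init` run above resolves three providers (openstack v3.4.0, local v2.6.1, null v3.2.4) and pins them in `.terraform.lock.hcl`. As a hypothetical sketch only, reconstructed from the versions and constraints visible in the log (the openstack constraint is truncated in this log, and the actual testbed configuration may differ), a `required_providers` block producing these selections could look like:

```hcl
terraform {
  required_providers {
    # Resolved to v3.4.0 in the log; the exact version constraint is truncated there.
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
    # Log shows the constraint ">= 2.2.0"; resolved to v2.6.1.
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    # Log shows "Finding latest version" (no constraint); resolved to v3.2.4.
    null = {
      source = "hashicorp/null"
    }
  }
}
```

Committing the generated `.terraform.lock.hcl`, as the output recommends, makes later `tofu init` runs reproduce exactly these provider versions.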
2026-01-10 13:44:54.329391 | orchestrator | ci.auto.tfvars
2026-01-10 13:44:54.332559 | orchestrator | default_custom.tf
2026-01-10 13:44:56.185987 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-10 13:44:56.743941 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-10 13:44:57.044944 | orchestrator |
2026-01-10 13:44:57.045020 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-10 13:44:57.045035 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-10 13:44:57.045040 | orchestrator | + create
2026-01-10 13:44:57.045054 | orchestrator | <= read (data resources)
2026-01-10 13:44:57.045060 | orchestrator |
2026-01-10 13:44:57.045064 | orchestrator | OpenTofu will perform the following actions:
2026-01-10 13:44:57.045068 | orchestrator |
2026-01-10 13:44:57.045073 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-10 13:44:57.045077 | orchestrator | # (config refers to values not yet known)
2026-01-10 13:44:57.045082 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-10 13:44:57.045086 | orchestrator | + checksum = (known after apply)
2026-01-10 13:44:57.045091 | orchestrator | + created_at = (known after apply)
2026-01-10 13:44:57.045095 | orchestrator | + file = (known after apply)
2026-01-10 13:44:57.045099 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.045122 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.045126 | orchestrator | + min_disk_gb = (known after apply)
2026-01-10 13:44:57.045130 | orchestrator | + min_ram_mb = (known after apply)
2026-01-10 13:44:57.045134 | orchestrator | + most_recent = true
2026-01-10 13:44:57.045139 | orchestrator | + name = (known after apply)
2026-01-10 13:44:57.045143 | orchestrator | + protected = (known after apply)
2026-01-10 13:44:57.045147 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.045154 | orchestrator | + schema = (known after apply)
2026-01-10 13:44:57.045158 | orchestrator | + size_bytes = (known after apply)
2026-01-10 13:44:57.045162 | orchestrator | + tags = (known after apply)
2026-01-10 13:44:57.045166 | orchestrator | + updated_at = (known after apply)
2026-01-10 13:44:57.045170 | orchestrator | }
2026-01-10 13:44:57.045176 | orchestrator |
2026-01-10 13:44:57.045180 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-10 13:44:57.045185 | orchestrator | # (config refers to values not yet known)
2026-01-10 13:44:57.045189 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-10 13:44:57.045193 | orchestrator | + checksum = (known after apply)
2026-01-10 13:44:57.045197 | orchestrator | + created_at = (known after apply)
2026-01-10 13:44:57.045201 | orchestrator | + file = (known after apply)
2026-01-10 13:44:57.045204 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.045208 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.045212 | orchestrator | + min_disk_gb = (known after apply)
2026-01-10 13:44:57.045216 | orchestrator | + min_ram_mb = (known after apply)
2026-01-10 13:44:57.045220 | orchestrator | + most_recent = true
2026-01-10 13:44:57.045224 | orchestrator | + name = (known after apply)
2026-01-10 13:44:57.045228 | orchestrator | + protected = (known after apply)
2026-01-10 13:44:57.045231 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.045235 | orchestrator | + schema = (known after apply)
2026-01-10 13:44:57.045239 | orchestrator | + size_bytes = (known after apply)
2026-01-10 13:44:57.045243 | orchestrator | + tags = (known after apply)
2026-01-10 13:44:57.045246 | orchestrator | + updated_at = (known after apply)
2026-01-10 13:44:57.045250 | orchestrator | }
2026-01-10 13:44:57.045254 | orchestrator |
2026-01-10 13:44:57.045258 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-10 13:44:57.045262 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-10 13:44:57.045266 | orchestrator | + content = (known after apply)
2026-01-10 13:44:57.045270 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-10 13:44:57.045274 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-10 13:44:57.045278 | orchestrator | + content_md5 = (known after apply)
2026-01-10 13:44:57.045282 | orchestrator | + content_sha1 = (known after apply)
2026-01-10 13:44:57.045303 | orchestrator | + content_sha256 = (known after apply)
2026-01-10 13:44:57.045308 | orchestrator | + content_sha512 = (known after apply)
2026-01-10 13:44:57.045311 | orchestrator | + directory_permission = "0777"
2026-01-10 13:44:57.045315 | orchestrator | + file_permission = "0644"
2026-01-10 13:44:57.045319 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-10 13:44:57.045323 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.045327 | orchestrator | }
2026-01-10 13:44:57.045333 | orchestrator |
2026-01-10 13:44:57.045337 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-10 13:44:57.045341 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-10 13:44:57.045345 | orchestrator | + content = (known after apply)
2026-01-10 13:44:57.045349 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-10 13:44:57.045352 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-10 13:44:57.045356 | orchestrator | + content_md5 = (known after apply)
2026-01-10 13:44:57.045360 | orchestrator | + content_sha1 = (known after apply)
2026-01-10 13:44:57.045364 | orchestrator | + content_sha256 = (known after apply)
2026-01-10 13:44:57.045368 | orchestrator | + content_sha512 = (known after apply)
2026-01-10 13:44:57.045371 | orchestrator | + directory_permission = "0777"
2026-01-10 13:44:57.045375 | orchestrator | + file_permission = "0644"
2026-01-10 13:44:57.045383 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-10 13:44:57.045387 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.045391 | orchestrator | }
2026-01-10 13:44:57.045394 | orchestrator |
2026-01-10 13:44:57.045404 | orchestrator | # local_file.inventory will be created
2026-01-10 13:44:57.045407 | orchestrator | + resource "local_file" "inventory" {
2026-01-10 13:44:57.045411 | orchestrator | + content = (known after apply)
2026-01-10 13:44:57.045415 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-10 13:44:57.045419 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-10 13:44:57.045423 | orchestrator | + content_md5 = (known after apply)
2026-01-10 13:44:57.045427 | orchestrator | + content_sha1 = (known after apply)
2026-01-10 13:44:57.045431 | orchestrator | + content_sha256 = (known after apply)
2026-01-10 13:44:57.045435 | orchestrator | + content_sha512 = (known after apply)
2026-01-10 13:44:57.045438 | orchestrator | + directory_permission = "0777"
2026-01-10 13:44:57.045442 | orchestrator | + file_permission = "0644"
2026-01-10 13:44:57.045446 | orchestrator | + filename = "inventory.ci"
2026-01-10 13:44:57.045450 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.045454 | orchestrator | }
2026-01-10 13:44:57.045459 | orchestrator |
2026-01-10 13:44:57.045463 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-10 13:44:57.045467 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-10 13:44:57.045471 | orchestrator | + content = (sensitive value)
2026-01-10 13:44:57.045475 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-10 13:44:57.045479 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-10 13:44:57.045483 | orchestrator | + content_md5 = (known after apply)
2026-01-10 13:44:57.045486 | orchestrator | + content_sha1 = (known after apply)
2026-01-10 13:44:57.045490 | orchestrator | + content_sha256 = (known after apply)
2026-01-10 13:44:57.045494 | orchestrator | + content_sha512 = (known after apply)
2026-01-10 13:44:57.045498 | orchestrator | + directory_permission = "0700"
2026-01-10 13:44:57.045502 | orchestrator | + file_permission = "0600"
2026-01-10 13:44:57.045505 | orchestrator | + filename = ".id_rsa.ci"
2026-01-10 13:44:57.045509 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.045513 | orchestrator | }
2026-01-10 13:44:57.047878 | orchestrator |
2026-01-10 13:44:57.047969 | orchestrator | # null_resource.node_semaphore will be created
2026-01-10 13:44:57.047974 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-10 13:44:57.047977 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.047982 | orchestrator | }
2026-01-10 13:44:57.047986 | orchestrator |
2026-01-10 13:44:57.047990 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-10 13:44:57.047995 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-10 13:44:57.047999 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.048003 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.048007 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.048011 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:57.048015 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.048019 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-10 13:44:57.048023 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.048027 | orchestrator | + size = 80
2026-01-10 13:44:57.048031 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.048035 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.048039 | orchestrator | }
2026-01-10 13:44:57.048053 | orchestrator |
2026-01-10 13:44:57.048057 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-10 13:44:57.048061 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-10 13:44:57.048065 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.048068 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.048072 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.048085 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:57.048089 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.048093 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-10 13:44:57.048097 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.048101 | orchestrator | + size = 80
2026-01-10 13:44:57.048105 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.048109 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.048113 | orchestrator | }
2026-01-10 13:44:57.048117 | orchestrator |
2026-01-10 13:44:57.048121 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-10 13:44:57.048124 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-10 13:44:57.048128 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.048132 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.048136 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.048140 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:57.048143 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.048147 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-10 13:44:57.048151 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.048155 | orchestrator | + size = 80
2026-01-10 13:44:57.048159 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.048163 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.048166 | orchestrator | }
2026-01-10 13:44:57.048170 | orchestrator |
2026-01-10 13:44:57.048174 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-10 13:44:57.048178 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-10 13:44:57.048182 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.048185 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.048189 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.048193 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:57.048197 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.048201 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-10 13:44:57.048205 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.048208 | orchestrator | + size = 80
2026-01-10 13:44:57.048212 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.048216 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.048220 | orchestrator | }
2026-01-10 13:44:57.048224 | orchestrator |
2026-01-10 13:44:57.048227 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-10 13:44:57.048231 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-10 13:44:57.048235 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.048239 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.048243 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.048247 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:57.048251 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.048258 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-10 13:44:57.048262 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.048266 | orchestrator | + size = 80
2026-01-10 13:44:57.048270 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.048274 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.048278 | orchestrator | }
2026-01-10 13:44:57.048282 | orchestrator |
2026-01-10 13:44:57.048301 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-10 13:44:57.048305 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-10 13:44:57.048309 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.048313 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.048317 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.048324 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:57.048328 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.048332 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-10 13:44:57.048336 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.048340 | orchestrator | + size = 80
2026-01-10 13:44:57.048343 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.048347 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.048351 | orchestrator | }
2026-01-10 13:44:57.048355 | orchestrator |
2026-01-10 13:44:57.048359 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-10 13:44:57.048363 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-10 13:44:57.048366 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.048370 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.048374 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.048378 | orchestrator | + image_id = (known after apply)
2026-01-10 13:44:57.048382 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.048386 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-10 13:44:57.048390 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.048393 | orchestrator | + size = 80
2026-01-10 13:44:57.048397 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.048401 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.048405 | orchestrator | }
2026-01-10 13:44:57.048409 | orchestrator |
2026-01-10 13:44:57.048412 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-10 13:44:57.048417 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:57.048421 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.048425 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.048429 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.048433 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.048437 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-10 13:44:57.048445 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.048449 | orchestrator | + size = 20
2026-01-10 13:44:57.048453 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.048457 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.048461 | orchestrator | }
2026-01-10 13:44:57.048465 | orchestrator |
2026-01-10 13:44:57.048469 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-10 13:44:57.048473 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:57.048476 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.048480 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.048484 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.048488 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.048492 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-10 13:44:57.048495 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.048499 | orchestrator | + size = 20
2026-01-10 13:44:57.048503 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.048507 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.048511 | orchestrator | }
2026-01-10 13:44:57.048514 | orchestrator |
2026-01-10 13:44:57.048518 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-10 13:44:57.048522 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:57.048526 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.048530 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.048534 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.048538 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.048541 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-10 13:44:57.048545 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.048552 | orchestrator | + size = 20
2026-01-10 13:44:57.048556 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.048560 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.048564 | orchestrator | }
2026-01-10 13:44:57.048568 | orchestrator |
2026-01-10 13:44:57.048572 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-10 13:44:57.048576 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:57.048580 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.048583 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.048587 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.048591 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.048595 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-10 13:44:57.048599 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.048602 | orchestrator | + size = 20
2026-01-10 13:44:57.048606 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.048610 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.048614 | orchestrator | }
2026-01-10 13:44:57.048618 | orchestrator |
2026-01-10 13:44:57.048622 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-10 13:44:57.048626 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:57.048629 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.048633 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.048637 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.048641 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.048645 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-10 13:44:57.048648 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.048661 | orchestrator | + size = 20
2026-01-10 13:44:57.048665 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.048669 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.048673 | orchestrator | }
2026-01-10 13:44:57.048676 | orchestrator |
2026-01-10 13:44:57.048680 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-10 13:44:57.048684 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:57.048688 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.048692 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.048696 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.048699 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.048703 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-10 13:44:57.048707 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.048711 | orchestrator | + size = 20
2026-01-10 13:44:57.048715 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.048718 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.048722 | orchestrator | }
2026-01-10 13:44:57.048726 | orchestrator |
2026-01-10 13:44:57.048730 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-10 13:44:57.048734 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:57.048738 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.048742 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.048745 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.048749 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.048753 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-10 13:44:57.048757 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.048761 | orchestrator | + size = 20
2026-01-10 13:44:57.048765 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.048768 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.048772 | orchestrator | }
2026-01-10 13:44:57.048776 | orchestrator |
2026-01-10 13:44:57.048780 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-10 13:44:57.048784 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-10 13:44:57.048791 | orchestrator | + attachment = (known after apply)
2026-01-10 13:44:57.048795 | orchestrator | + availability_zone = "nova"
2026-01-10 13:44:57.048799 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.048802 | orchestrator | + metadata = (known after apply)
2026-01-10 13:44:57.048806 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-10 13:44:57.048810 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.048814 | orchestrator | + size = 20
2026-01-10 13:44:57.048818 | orchestrator | + volume_retype_policy = "never"
2026-01-10 13:44:57.048822 | orchestrator | + volume_type = "ssd"
2026-01-10 13:44:57.048825 | orchestrator | }
2026-01-10 13:44:57.048829 | orchestrator |
2026-01-10 13:44:57.048833 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-10 13:44:57.048837 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-10 13:44:57.048848 | orchestrator | + attachment = (known after apply) 2026-01-10 13:44:57.048852 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:57.048856 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.048859 | orchestrator | + metadata = (known after apply) 2026-01-10 13:44:57.048863 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-10 13:44:57.048867 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.048871 | orchestrator | + size = 20 2026-01-10 13:44:57.048874 | orchestrator | + volume_retype_policy = "never" 2026-01-10 13:44:57.048878 | orchestrator | + volume_type = "ssd" 2026-01-10 13:44:57.048882 | orchestrator | } 2026-01-10 13:44:57.048886 | orchestrator | 2026-01-10 13:44:57.048890 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-10 13:44:57.048894 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-10 13:44:57.048898 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:57.048901 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:57.048905 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:57.048909 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:57.048913 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:57.048917 | orchestrator | + config_drive = true 2026-01-10 13:44:57.048920 | orchestrator | + created = (known after apply) 2026-01-10 13:44:57.048924 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:57.048928 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-10 13:44:57.048932 | orchestrator | + force_delete = false 2026-01-10 13:44:57.048936 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:57.048939 | 
orchestrator | + id = (known after apply) 2026-01-10 13:44:57.048943 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:57.048947 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:57.048951 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:57.048954 | orchestrator | + name = "testbed-manager" 2026-01-10 13:44:57.048958 | orchestrator | + power_state = "active" 2026-01-10 13:44:57.048962 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.048966 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:57.048970 | orchestrator | + stop_before_destroy = false 2026-01-10 13:44:57.048973 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:57.048977 | orchestrator | + user_data = (sensitive value) 2026-01-10 13:44:57.048981 | orchestrator | 2026-01-10 13:44:57.048985 | orchestrator | + block_device { 2026-01-10 13:44:57.048989 | orchestrator | + boot_index = 0 2026-01-10 13:44:57.048993 | orchestrator | + delete_on_termination = false 2026-01-10 13:44:57.049000 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:57.049004 | orchestrator | + multiattach = false 2026-01-10 13:44:57.049007 | orchestrator | + source_type = "volume" 2026-01-10 13:44:57.049011 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.049018 | orchestrator | } 2026-01-10 13:44:57.049022 | orchestrator | 2026-01-10 13:44:57.049026 | orchestrator | + network { 2026-01-10 13:44:57.049030 | orchestrator | + access_network = false 2026-01-10 13:44:57.049034 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:57.049038 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:57.049042 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:57.049045 | orchestrator | + name = (known after apply) 2026-01-10 13:44:57.049049 | orchestrator | + port = (known after apply) 2026-01-10 13:44:57.049053 | orchestrator | + uuid = (known after apply) 2026-01-10 
13:44:57.049057 | orchestrator | } 2026-01-10 13:44:57.049061 | orchestrator | } 2026-01-10 13:44:57.049064 | orchestrator | 2026-01-10 13:44:57.049068 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-10 13:44:57.049072 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-10 13:44:57.049076 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:57.049080 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:57.049083 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:57.049087 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:57.049091 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:57.049095 | orchestrator | + config_drive = true 2026-01-10 13:44:57.049098 | orchestrator | + created = (known after apply) 2026-01-10 13:44:57.049102 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:57.049106 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-10 13:44:57.049110 | orchestrator | + force_delete = false 2026-01-10 13:44:57.049114 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:57.049117 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.049121 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:57.049125 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:57.049129 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:57.049133 | orchestrator | + name = "testbed-node-0" 2026-01-10 13:44:57.049136 | orchestrator | + power_state = "active" 2026-01-10 13:44:57.049140 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.049144 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:57.049148 | orchestrator | + stop_before_destroy = false 2026-01-10 13:44:57.049151 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:57.049155 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-10 13:44:57.049159 | orchestrator | 2026-01-10 13:44:57.049163 | orchestrator | + block_device { 2026-01-10 13:44:57.049167 | orchestrator | + boot_index = 0 2026-01-10 13:44:57.049171 | orchestrator | + delete_on_termination = false 2026-01-10 13:44:57.049175 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:57.049178 | orchestrator | + multiattach = false 2026-01-10 13:44:57.049182 | orchestrator | + source_type = "volume" 2026-01-10 13:44:57.049186 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.049190 | orchestrator | } 2026-01-10 13:44:57.049194 | orchestrator | 2026-01-10 13:44:57.049198 | orchestrator | + network { 2026-01-10 13:44:57.049201 | orchestrator | + access_network = false 2026-01-10 13:44:57.049205 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:57.049209 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:57.049213 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:57.049217 | orchestrator | + name = (known after apply) 2026-01-10 13:44:57.049221 | orchestrator | + port = (known after apply) 2026-01-10 13:44:57.049224 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.049228 | orchestrator | } 2026-01-10 13:44:57.049232 | orchestrator | } 2026-01-10 13:44:57.049239 | orchestrator | 2026-01-10 13:44:57.049243 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-10 13:44:57.049246 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-10 13:44:57.049250 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:57.049257 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:57.049261 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:57.049265 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:57.049269 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:57.049272 
| orchestrator | + config_drive = true 2026-01-10 13:44:57.049276 | orchestrator | + created = (known after apply) 2026-01-10 13:44:57.049280 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:57.049284 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-10 13:44:57.049301 | orchestrator | + force_delete = false 2026-01-10 13:44:57.049305 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:57.049309 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.049313 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:57.049316 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:57.049320 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:57.049324 | orchestrator | + name = "testbed-node-1" 2026-01-10 13:44:57.049328 | orchestrator | + power_state = "active" 2026-01-10 13:44:57.049332 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.049335 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:57.049339 | orchestrator | + stop_before_destroy = false 2026-01-10 13:44:57.049343 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:57.049347 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-10 13:44:57.049351 | orchestrator | 2026-01-10 13:44:57.049355 | orchestrator | + block_device { 2026-01-10 13:44:57.049359 | orchestrator | + boot_index = 0 2026-01-10 13:44:57.049362 | orchestrator | + delete_on_termination = false 2026-01-10 13:44:57.049366 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:57.049370 | orchestrator | + multiattach = false 2026-01-10 13:44:57.049374 | orchestrator | + source_type = "volume" 2026-01-10 13:44:57.049377 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.049381 | orchestrator | } 2026-01-10 13:44:57.049385 | orchestrator | 2026-01-10 13:44:57.049389 | orchestrator | + network { 2026-01-10 13:44:57.049393 | orchestrator | + access_network = 
false 2026-01-10 13:44:57.049397 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:57.049401 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:57.049404 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:57.049408 | orchestrator | + name = (known after apply) 2026-01-10 13:44:57.049412 | orchestrator | + port = (known after apply) 2026-01-10 13:44:57.049416 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.049420 | orchestrator | } 2026-01-10 13:44:57.049423 | orchestrator | } 2026-01-10 13:44:57.049427 | orchestrator | 2026-01-10 13:44:57.049431 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-10 13:44:57.049435 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-10 13:44:57.049439 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:57.049443 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:57.049447 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:57.049451 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:57.049457 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:57.049461 | orchestrator | + config_drive = true 2026-01-10 13:44:57.049465 | orchestrator | + created = (known after apply) 2026-01-10 13:44:57.049469 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:57.049473 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-10 13:44:57.049477 | orchestrator | + force_delete = false 2026-01-10 13:44:57.049480 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:57.049484 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.049488 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:57.049495 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:57.049499 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:57.049503 | orchestrator | + name = 
"testbed-node-2" 2026-01-10 13:44:57.049506 | orchestrator | + power_state = "active" 2026-01-10 13:44:57.049510 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.049514 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:57.049518 | orchestrator | + stop_before_destroy = false 2026-01-10 13:44:57.049522 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:57.049525 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-10 13:44:57.049529 | orchestrator | 2026-01-10 13:44:57.049533 | orchestrator | + block_device { 2026-01-10 13:44:57.049537 | orchestrator | + boot_index = 0 2026-01-10 13:44:57.049541 | orchestrator | + delete_on_termination = false 2026-01-10 13:44:57.049545 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:57.049548 | orchestrator | + multiattach = false 2026-01-10 13:44:57.049552 | orchestrator | + source_type = "volume" 2026-01-10 13:44:57.049556 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.049560 | orchestrator | } 2026-01-10 13:44:57.049564 | orchestrator | 2026-01-10 13:44:57.049568 | orchestrator | + network { 2026-01-10 13:44:57.049571 | orchestrator | + access_network = false 2026-01-10 13:44:57.049575 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:57.049579 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:57.049583 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:57.049587 | orchestrator | + name = (known after apply) 2026-01-10 13:44:57.049591 | orchestrator | + port = (known after apply) 2026-01-10 13:44:57.049594 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.049598 | orchestrator | } 2026-01-10 13:44:57.049602 | orchestrator | } 2026-01-10 13:44:57.049609 | orchestrator | 2026-01-10 13:44:57.049613 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-10 13:44:57.049617 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-01-10 13:44:57.049620 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:57.049624 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:57.049628 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:57.049632 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:57.049636 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:57.049639 | orchestrator | + config_drive = true 2026-01-10 13:44:57.049643 | orchestrator | + created = (known after apply) 2026-01-10 13:44:57.049708 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:57.049714 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-10 13:44:57.049718 | orchestrator | + force_delete = false 2026-01-10 13:44:57.049722 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:57.049726 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.049730 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:57.049733 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:57.049737 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:57.049741 | orchestrator | + name = "testbed-node-3" 2026-01-10 13:44:57.049745 | orchestrator | + power_state = "active" 2026-01-10 13:44:57.049749 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.049753 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:57.049756 | orchestrator | + stop_before_destroy = false 2026-01-10 13:44:57.049760 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:57.049764 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-10 13:44:57.049768 | orchestrator | 2026-01-10 13:44:57.049772 | orchestrator | + block_device { 2026-01-10 13:44:57.049779 | orchestrator | + boot_index = 0 2026-01-10 13:44:57.049783 | orchestrator | + delete_on_termination = false 2026-01-10 
13:44:57.049787 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:57.049795 | orchestrator | + multiattach = false 2026-01-10 13:44:57.049798 | orchestrator | + source_type = "volume" 2026-01-10 13:44:57.049802 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.049806 | orchestrator | } 2026-01-10 13:44:57.049810 | orchestrator | 2026-01-10 13:44:57.049814 | orchestrator | + network { 2026-01-10 13:44:57.049818 | orchestrator | + access_network = false 2026-01-10 13:44:57.049822 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:57.049825 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:57.049829 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:57.049833 | orchestrator | + name = (known after apply) 2026-01-10 13:44:57.049837 | orchestrator | + port = (known after apply) 2026-01-10 13:44:57.049841 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.049844 | orchestrator | } 2026-01-10 13:44:57.049848 | orchestrator | } 2026-01-10 13:44:57.049852 | orchestrator | 2026-01-10 13:44:57.049856 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-10 13:44:57.049860 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-10 13:44:57.049864 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:57.049868 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:57.049871 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:57.049875 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:57.049879 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:57.049883 | orchestrator | + config_drive = true 2026-01-10 13:44:57.049887 | orchestrator | + created = (known after apply) 2026-01-10 13:44:57.049891 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:57.049894 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-10 13:44:57.049898 | 
orchestrator | + force_delete = false 2026-01-10 13:44:57.049902 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:57.049906 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.049910 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:57.049913 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:57.049917 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:57.049921 | orchestrator | + name = "testbed-node-4" 2026-01-10 13:44:57.049925 | orchestrator | + power_state = "active" 2026-01-10 13:44:57.049929 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.049932 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:57.049936 | orchestrator | + stop_before_destroy = false 2026-01-10 13:44:57.049940 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:57.049944 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-10 13:44:57.049948 | orchestrator | 2026-01-10 13:44:57.049952 | orchestrator | + block_device { 2026-01-10 13:44:57.049956 | orchestrator | + boot_index = 0 2026-01-10 13:44:57.049959 | orchestrator | + delete_on_termination = false 2026-01-10 13:44:57.049963 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:57.049967 | orchestrator | + multiattach = false 2026-01-10 13:44:57.049971 | orchestrator | + source_type = "volume" 2026-01-10 13:44:57.049975 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.049979 | orchestrator | } 2026-01-10 13:44:57.049982 | orchestrator | 2026-01-10 13:44:57.049986 | orchestrator | + network { 2026-01-10 13:44:57.049990 | orchestrator | + access_network = false 2026-01-10 13:44:57.049994 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:57.049998 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:57.050002 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:57.050006 | orchestrator | + name = (known 
after apply) 2026-01-10 13:44:57.050009 | orchestrator | + port = (known after apply) 2026-01-10 13:44:57.050034 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.050039 | orchestrator | } 2026-01-10 13:44:57.050043 | orchestrator | } 2026-01-10 13:44:57.050054 | orchestrator | 2026-01-10 13:44:57.050058 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-10 13:44:57.050062 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-10 13:44:57.050066 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-10 13:44:57.050070 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-10 13:44:57.050073 | orchestrator | + all_metadata = (known after apply) 2026-01-10 13:44:57.050077 | orchestrator | + all_tags = (known after apply) 2026-01-10 13:44:57.050081 | orchestrator | + availability_zone = "nova" 2026-01-10 13:44:57.050085 | orchestrator | + config_drive = true 2026-01-10 13:44:57.050088 | orchestrator | + created = (known after apply) 2026-01-10 13:44:57.050092 | orchestrator | + flavor_id = (known after apply) 2026-01-10 13:44:57.050096 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-10 13:44:57.050100 | orchestrator | + force_delete = false 2026-01-10 13:44:57.050106 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-10 13:44:57.050110 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.050114 | orchestrator | + image_id = (known after apply) 2026-01-10 13:44:57.050117 | orchestrator | + image_name = (known after apply) 2026-01-10 13:44:57.050121 | orchestrator | + key_pair = "testbed" 2026-01-10 13:44:57.050125 | orchestrator | + name = "testbed-node-5" 2026-01-10 13:44:57.050129 | orchestrator | + power_state = "active" 2026-01-10 13:44:57.050132 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.050136 | orchestrator | + security_groups = (known after apply) 2026-01-10 13:44:57.050140 | orchestrator | + 
stop_before_destroy = false 2026-01-10 13:44:57.050144 | orchestrator | + updated = (known after apply) 2026-01-10 13:44:57.050147 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-10 13:44:57.050151 | orchestrator | 2026-01-10 13:44:57.050155 | orchestrator | + block_device { 2026-01-10 13:44:57.050159 | orchestrator | + boot_index = 0 2026-01-10 13:44:57.050163 | orchestrator | + delete_on_termination = false 2026-01-10 13:44:57.050166 | orchestrator | + destination_type = "volume" 2026-01-10 13:44:57.050170 | orchestrator | + multiattach = false 2026-01-10 13:44:57.050174 | orchestrator | + source_type = "volume" 2026-01-10 13:44:57.050178 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.050182 | orchestrator | } 2026-01-10 13:44:57.050185 | orchestrator | 2026-01-10 13:44:57.050189 | orchestrator | + network { 2026-01-10 13:44:57.050193 | orchestrator | + access_network = false 2026-01-10 13:44:57.050197 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-10 13:44:57.050201 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-10 13:44:57.050204 | orchestrator | + mac = (known after apply) 2026-01-10 13:44:57.050208 | orchestrator | + name = (known after apply) 2026-01-10 13:44:57.050212 | orchestrator | + port = (known after apply) 2026-01-10 13:44:57.050216 | orchestrator | + uuid = (known after apply) 2026-01-10 13:44:57.050220 | orchestrator | } 2026-01-10 13:44:57.050223 | orchestrator | } 2026-01-10 13:44:57.050227 | orchestrator | 2026-01-10 13:44:57.050231 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-10 13:44:57.050235 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-01-10 13:44:57.050239 | orchestrator | + fingerprint = (known after apply) 2026-01-10 13:44:57.050243 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.050247 | orchestrator | + name = "testbed" 2026-01-10 13:44:57.050250 | orchestrator | + private_key = 
(sensitive value) 2026-01-10 13:44:57.050254 | orchestrator | + public_key = (known after apply) 2026-01-10 13:44:57.050258 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.050262 | orchestrator | + user_id = (known after apply) 2026-01-10 13:44:57.050266 | orchestrator | } 2026-01-10 13:44:57.050270 | orchestrator | 2026-01-10 13:44:57.050273 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-10 13:44:57.050277 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-10 13:44:57.050285 | orchestrator | + device = (known after apply) 2026-01-10 13:44:57.050302 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.050305 | orchestrator | + instance_id = (known after apply) 2026-01-10 13:44:57.050309 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.050313 | orchestrator | + volume_id = (known after apply) 2026-01-10 13:44:57.050317 | orchestrator | } 2026-01-10 13:44:57.050321 | orchestrator | 2026-01-10 13:44:57.050324 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-10 13:44:57.050328 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-10 13:44:57.050332 | orchestrator | + device = (known after apply) 2026-01-10 13:44:57.050336 | orchestrator | + id = (known after apply) 2026-01-10 13:44:57.050340 | orchestrator | + instance_id = (known after apply) 2026-01-10 13:44:57.050344 | orchestrator | + region = (known after apply) 2026-01-10 13:44:57.050347 | orchestrator | + volume_id = (known after apply) 2026-01-10 13:44:57.050351 | orchestrator | } 2026-01-10 13:44:57.050355 | orchestrator | 2026-01-10 13:44:57.050359 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-10 13:44:57.050363 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-01-10 13:44:57.052646 | orchestrator | + network_id = (known after apply)
2026-01-10 13:44:57.052650 | orchestrator | + no_gateway = false
2026-01-10 13:44:57.052654 | orchestrator | + region = (known after apply)
2026-01-10 13:44:57.052658 | orchestrator | + service_types = (known after apply)
2026-01-10 13:44:57.052665 | orchestrator | + tenant_id = (known after apply)
2026-01-10 13:44:57.052669 | orchestrator |
2026-01-10 13:44:57.052673 | orchestrator | + allocation_pool {
2026-01-10 13:44:57.052677 | orchestrator | + end = "192.168.31.250"
2026-01-10 13:44:57.052680 | orchestrator | + start = "192.168.31.200"
2026-01-10 13:44:57.052684 | orchestrator | }
2026-01-10 13:44:57.052688 | orchestrator | }
2026-01-10 13:44:57.052692 | orchestrator |
2026-01-10 13:44:57.052696 | orchestrator | # terraform_data.image will be created
2026-01-10 13:44:57.052699 | orchestrator | + resource "terraform_data" "image" {
2026-01-10 13:44:57.052703 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.052707 | orchestrator | + input = "Ubuntu 24.04"
2026-01-10 13:44:57.052711 | orchestrator | + output = (known after apply)
2026-01-10 13:44:57.052715 | orchestrator | }
2026-01-10 13:44:57.052719 | orchestrator |
2026-01-10 13:44:57.052722 | orchestrator | # terraform_data.image_node will be created
2026-01-10 13:44:57.052726 | orchestrator | + resource "terraform_data" "image_node" {
2026-01-10 13:44:57.052730 | orchestrator | + id = (known after apply)
2026-01-10 13:44:57.052734 | orchestrator | + input = "Ubuntu 24.04"
2026-01-10 13:44:57.052738 | orchestrator | + output = (known after apply)
2026-01-10 13:44:57.052742 | orchestrator | }
2026-01-10 13:44:57.052745 | orchestrator |
2026-01-10 13:44:57.052749 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-01-10 13:44:57.052753 | orchestrator |
2026-01-10 13:44:57.052757 | orchestrator | Changes to Outputs:
2026-01-10 13:44:57.052761 | orchestrator | + manager_address = (sensitive value)
2026-01-10 13:44:57.052765 | orchestrator | + private_key = (sensitive value)
2026-01-10 13:44:57.224440 | orchestrator | terraform_data.image_node: Creating...
2026-01-10 13:44:57.224741 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=0c16b793-9fb8-feac-5603-37e544cb671b]
2026-01-10 13:44:57.302852 | orchestrator | terraform_data.image: Creating...
2026-01-10 13:44:57.306180 | orchestrator | terraform_data.image: Creation complete after 0s [id=4a11a9cf-d3e5-12a0-71c5-726d054cc1b6]
2026-01-10 13:44:57.335844 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-01-10 13:44:57.345625 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-01-10 13:44:57.356265 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-01-10 13:44:57.357609 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-01-10 13:44:57.363441 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-01-10 13:44:57.365589 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-01-10 13:44:57.365875 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-01-10 13:44:57.366050 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-01-10 13:44:57.366265 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-01-10 13:44:57.369816 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-01-10 13:44:57.807551 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-10 13:44:57.812605 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-10 13:44:57.814345 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-01-10 13:44:57.816188 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-01-10 13:44:57.885962 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-01-10 13:44:57.895640 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-01-10 13:44:58.390278 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=936ab61c-5063-4430-a47a-fffb3b6a0d4b]
2026-01-10 13:44:58.402797 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-01-10 13:45:01.023737 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=2130b2ec-580e-4b39-88b4-748d7926916f]
2026-01-10 13:45:01.030525 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-01-10 13:45:01.031784 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=f7705bd4-29b3-411e-b8b9-50568fcffd73]
2026-01-10 13:45:01.041324 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-01-10 13:45:01.044701 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=4515c98e-1f25-421e-81d3-264e20827141]
2026-01-10 13:45:01.051662 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-01-10 13:45:01.061796 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=70c6fd94-218f-483a-b965-10c70b1b97fc]
2026-01-10 13:45:01.077138 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-01-10 13:45:01.081979 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=9cc4e4a6-fdb0-4f2b-8497-7d80ad86af00]
2026-01-10 13:45:01.087790 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-01-10 13:45:01.101176 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=355a7212-75f2-41c4-a284-fbc15ac49d3c]
2026-01-10 13:45:01.106243 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-01-10 13:45:01.119336 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=e6c5241f-60aa-42cf-822c-98275b24deb1]
2026-01-10 13:45:01.133570 | orchestrator | local_file.id_rsa_pub: Creating...
2026-01-10 13:45:01.140030 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=14b81f32c483f6cdb039769d1ce4e521c69939e3]
2026-01-10 13:45:01.148775 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-01-10 13:45:01.152895 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=8a1874aa4c77ca88092ed449664ee5cb3b638618]
2026-01-10 13:45:01.159873 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-01-10 13:45:01.165629 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=763a4a26-d97a-40e2-a569-d464b2971007]
2026-01-10 13:45:01.173347 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=45b03c06-0ab6-4b62-8b16-77c772305c6a]
2026-01-10 13:45:01.754107 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=4de0ae7e-d1be-41e9-b1b3-63033e05d901]
2026-01-10 13:45:02.116331 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=2ac081aa-a5c8-41c2-9239-48334ef5b596]
2026-01-10 13:45:02.124630 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-01-10 13:45:04.454784 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431]
2026-01-10 13:45:04.516932 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=832b7b05-f737-40ff-a441-99af22cffa7c]
2026-01-10 13:45:04.541614 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=eb1a97c9-b500-4e71-8a2b-c22723210725]
2026-01-10 13:45:04.544406 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=6133482b-469c-4f4b-9769-bc6dc055ce78]
2026-01-10 13:45:04.554494 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=9057548c-db5e-442a-947a-e28af578a58f]
2026-01-10 13:45:04.600169 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=6ed1754d-0592-4676-ba37-32169761691d]
2026-01-10 13:45:05.867390 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=e6b58735-71ce-484b-a334-e16324c7f683]
2026-01-10 13:45:05.880856 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-01-10 13:45:05.880930 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-01-10 13:45:05.880939 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-01-10 13:45:06.171812 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=1ebfaa40-1915-4ae8-9bc8-25812fb1e951]
2026-01-10 13:45:06.182719 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-01-10 13:45:06.184260 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-01-10 13:45:06.184349 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-01-10 13:45:06.184386 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-01-10 13:45:06.184394 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-01-10 13:45:06.184408 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-01-10 13:45:06.360987 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=42e5532c-4422-41d1-b050-638b2c253e43]
2026-01-10 13:45:06.372275 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-01-10 13:45:06.372402 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-01-10 13:45:06.376146 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-01-10 13:45:06.409195 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=8865de06-990e-4125-ba41-7e7793827370]
2026-01-10 13:45:06.422070 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-01-10 13:45:06.625781 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=375cd6be-28ff-4f9e-a6fb-67d0daebb901]
2026-01-10 13:45:06.638218 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=d34a892d-8b82-4adc-ab9e-7add27111901]
2026-01-10 13:45:06.640604 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-01-10 13:45:06.650236 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-01-10 13:45:06.798602 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=75795a3e-d1c2-47c1-bdee-9394cd14c763]
2026-01-10 13:45:06.812323 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-01-10 13:45:06.825878 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=3e0c9515-5380-42c7-a24d-f73e336916a4]
2026-01-10 13:45:06.836538 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-01-10 13:45:07.028050 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=ba1f38f7-6d02-4fd6-8e92-feae7eb9ad38]
2026-01-10 13:45:07.037804 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-01-10 13:45:07.139995 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=22b798ee-68e4-4f5a-916d-a968b9464d61]
2026-01-10 13:45:07.147469 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-01-10 13:45:07.214190 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=bdc82d66-ec0e-4cee-9b1b-e7d6e955d96a]
2026-01-10 13:45:07.388860 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 0s [id=6f5d7597-9ce8-401f-aeac-791a57f66809]
2026-01-10 13:45:07.549397 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=2eb836ac-e5d7-40da-b842-32062da087ab]
2026-01-10 13:45:07.614924 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=3944cb25-2857-4c43-ac80-073df8303852]
2026-01-10 13:45:07.712372 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=15bccc55-3b79-4612-a026-a719205ac2ed]
2026-01-10 13:45:07.840320 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=41044753-bfca-4855-b291-797854a64a1a]
2026-01-10 13:45:07.851538 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=275c95e9-f0d4-4827-ba68-921d4120e180]
2026-01-10 13:45:08.109236 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 2s [id=dce56364-5a60-495d-8c5d-4c2dbd23f83d]
2026-01-10 13:45:08.810607 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=0d29d766-e7a1-4fb6-89d1-1dfc193ea721]
2026-01-10 13:45:09.208836 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=0f75f6dc-6810-47cc-ab0f-154331ac49eb]
2026-01-10 13:45:09.232078 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-01-10 13:45:09.238868 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-01-10 13:45:09.245455 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-01-10 13:45:09.248521 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-01-10 13:45:09.248810 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-01-10 13:45:09.261443 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-01-10 13:45:09.263667 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-01-10 13:45:11.193773 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=78b074d1-8424-4ce7-9141-0cea21126655]
2026-01-10 13:45:11.202445 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-01-10 13:45:11.207767 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-01-10 13:45:11.210770 | orchestrator | local_file.inventory: Creating...
2026-01-10 13:45:11.212872 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=6fbe91d8f44a83ea83f83fef6b311f910eedbbca]
2026-01-10 13:45:11.214815 | orchestrator | local_file.inventory: Creation complete after 0s [id=eda2874cc014eb7a4f220f179f576de30539416d]
2026-01-10 13:45:12.469500 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=78b074d1-8424-4ce7-9141-0cea21126655]
2026-01-10 13:45:19.242314 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-01-10 13:45:19.246630 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-01-10 13:45:19.249781 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-01-10 13:45:19.249833 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-01-10 13:45:19.263142 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-01-10 13:45:19.264275 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-01-10 13:45:29.251756 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-01-10 13:45:29.251907 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-01-10 13:45:29.251923 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-01-10 13:45:29.251950 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-01-10 13:45:29.264106 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-01-10 13:45:29.264503 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-01-10 13:45:39.261412 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-01-10 13:45:39.261601 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-01-10 13:45:39.261632 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-01-10 13:45:39.261651 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-01-10 13:45:39.264734 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-01-10 13:45:39.264821 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-01-10 13:45:49.271032 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-01-10 13:45:49.271177 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-01-10 13:45:49.271191 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-01-10 13:45:49.271200 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-01-10 13:45:49.271225 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-01-10 13:45:49.271233 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-01-10 13:45:50.189827 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=74f13f40-63ab-4e83-ae25-09645a3e8ccf]
2026-01-10 13:45:50.337934 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=4253e144-2c7d-4f4b-9635-0cb2fcd0eba1]
2026-01-10 13:45:59.279732 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-01-10 13:45:59.279883 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2026-01-10 13:45:59.279897 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed]
2026-01-10 13:45:59.279907 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-01-10 13:46:00.151966 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 51s [id=f1af1796-3330-4bac-b68d-e94ccafa9ac0]
2026-01-10 13:46:00.238244 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 51s [id=54688b9a-75fe-4cc3-8113-c8e0ea6c7b60]
2026-01-10 13:46:00.381703 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 51s [id=2c5b7dcc-524e-4e4e-8008-640ee55cc508]
2026-01-10 13:46:09.288262 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [1m0s elapsed]
2026-01-10 13:46:10.268888 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 1m1s [id=b916409b-dd3e-402d-94f1-9299cef073c2]
2026-01-10 13:46:10.293465 | orchestrator | null_resource.node_semaphore: Creating...
2026-01-10 13:46:10.296899 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=7512716156338129915]
2026-01-10 13:46:10.298770 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-01-10 13:46:10.302072 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-01-10 13:46:10.317241 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-01-10 13:46:10.329490 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-01-10 13:46:10.329712 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-01-10 13:46:10.332018 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-01-10 13:46:10.332182 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-01-10 13:46:10.350499 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-01-10 13:46:10.351237 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-01-10 13:46:10.363809 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-01-10 13:46:13.704510 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=2c5b7dcc-524e-4e4e-8008-640ee55cc508/355a7212-75f2-41c4-a284-fbc15ac49d3c]
2026-01-10 13:46:13.764352 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=2c5b7dcc-524e-4e4e-8008-640ee55cc508/9cc4e4a6-fdb0-4f2b-8497-7d80ad86af00]
2026-01-10 13:46:13.787421 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=4253e144-2c7d-4f4b-9635-0cb2fcd0eba1/e6c5241f-60aa-42cf-822c-98275b24deb1]
2026-01-10 13:46:13.790183 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=54688b9a-75fe-4cc3-8113-c8e0ea6c7b60/2130b2ec-580e-4b39-88b4-748d7926916f]
2026-01-10 13:46:13.812148 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=4253e144-2c7d-4f4b-9635-0cb2fcd0eba1/45b03c06-0ab6-4b62-8b16-77c772305c6a]
2026-01-10 13:46:13.836718 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=54688b9a-75fe-4cc3-8113-c8e0ea6c7b60/f7705bd4-29b3-411e-b8b9-50568fcffd73]
2026-01-10 13:46:19.860117 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=2c5b7dcc-524e-4e4e-8008-640ee55cc508/4515c98e-1f25-421e-81d3-264e20827141]
2026-01-10 13:46:19.915060 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=4253e144-2c7d-4f4b-9635-0cb2fcd0eba1/763a4a26-d97a-40e2-a569-d464b2971007]
2026-01-10 13:46:19.952375 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=54688b9a-75fe-4cc3-8113-c8e0ea6c7b60/70c6fd94-218f-483a-b965-10c70b1b97fc]
2026-01-10 13:46:20.331966 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-01-10 13:46:30.341030 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-01-10 13:46:32.097963 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 22s [id=70c28bd8-d6dd-4ed9-ba2d-4e6a2657b18a]
2026-01-10 13:46:32.121333 | orchestrator |
2026-01-10 13:46:32.121441 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-01-10 13:46:32.121462 | orchestrator |
2026-01-10 13:46:32.121468 | orchestrator | Outputs:
2026-01-10 13:46:32.121474 | orchestrator |
2026-01-10 13:46:32.121478 | orchestrator | manager_address =
2026-01-10 13:46:32.121484 | orchestrator | private_key =
2026-01-10 13:46:32.517268 | orchestrator | ok: Runtime: 0:01:41.760266
2026-01-10 13:46:32.556907 |
2026-01-10 13:46:32.557130 | TASK [Create infrastructure (stable)]
2026-01-10 13:46:33.096510 | orchestrator | skipping: Conditional result was False
2026-01-10 13:46:33.108667 |
2026-01-10 13:46:33.108838 | TASK [Fetch manager address]
2026-01-10 13:46:33.596709 | orchestrator | ok
2026-01-10 13:46:33.607536 |
2026-01-10 13:46:33.607684 | TASK [Set manager_host address]
2026-01-10 13:46:33.686932 | orchestrator | ok
2026-01-10 13:46:33.699060 |
2026-01-10 13:46:33.699233 | LOOP [Update ansible collections]
2026-01-10 13:46:34.726481 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-10 13:46:34.726770 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-10 13:46:34.726807 | orchestrator | Starting galaxy collection install process
2026-01-10 13:46:34.726857 | orchestrator | Process install dependency map
2026-01-10 13:46:34.726882 | orchestrator | Starting collection install process
2026-01-10 13:46:34.726903 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons'
2026-01-10 13:46:34.726927 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons
2026-01-10 13:46:34.726952 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-01-10 13:46:34.727042 | orchestrator | ok: Item: commons Runtime: 0:00:00.663064
2026-01-10 13:46:35.739745 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-10 13:46:35.739921 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-10 13:46:35.739978 | orchestrator | Starting galaxy collection install process
2026-01-10 13:46:35.740075 | orchestrator | Process install dependency map
2026-01-10 13:46:35.740134 | orchestrator | Starting collection install process
2026-01-10 13:46:35.740179 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services'
2026-01-10 13:46:35.740216 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services
2026-01-10 13:46:35.740251 | orchestrator | osism.services:999.0.0 was installed successfully
2026-01-10 13:46:35.740312 | orchestrator | ok: Item: services Runtime: 0:00:00.715367
2026-01-10 13:46:35.765480 |
2026-01-10 13:46:35.765661 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-01-10 13:46:46.380383 | orchestrator | ok
2026-01-10 13:46:46.392528 |
2026-01-10 13:46:46.392670 | TASK [Wait a little longer for the manager so that everything is ready]
2026-01-10 13:47:46.438086 | orchestrator | ok
2026-01-10 13:47:46.449051 |
2026-01-10 13:47:46.449193 | TASK [Fetch manager ssh hostkey]
2026-01-10 13:47:48.023300 | orchestrator | Output suppressed because no_log was given
2026-01-10 13:47:48.039645 |
2026-01-10 13:47:48.039827 | TASK [Get ssh keypair from terraform environment]
2026-01-10 13:47:48.580129 | orchestrator | ok: Runtime: 0:00:00.010118
2026-01-10 13:47:48.596888 |
2026-01-10 13:47:48.597137 | TASK [Point out that the following task takes some time and does not give any output]
2026-01-10 13:47:48.631269 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-01-10 13:47:48.641048 |
2026-01-10 13:47:48.641202 | TASK [Run manager part 0]
2026-01-10 13:47:49.777077 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-10 13:47:49.847762 | orchestrator |
2026-01-10 13:47:49.847847 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-01-10 13:47:49.847857 | orchestrator |
2026-01-10 13:47:49.847877 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-01-10 13:47:51.744527 | orchestrator | ok: [testbed-manager]
2026-01-10 13:47:52.099621 | orchestrator |
2026-01-10 13:47:52.099697 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-01-10 13:47:52.099708 | orchestrator |
2026-01-10 13:47:52.099718 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-10 13:47:54.172845 | orchestrator | ok: [testbed-manager]
2026-01-10 13:47:54.172915 | orchestrator |
2026-01-10 13:47:54.172923 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-01-10 13:47:54.828652 | orchestrator | ok: [testbed-manager]
2026-01-10 13:47:54.828722 | orchestrator |
2026-01-10 13:47:54.828732 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-01-10 13:47:54.871755 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:54.871825 | orchestrator |
2026-01-10 13:47:54.871838 | orchestrator | TASK [Update package cache] ****************************************************
2026-01-10 13:47:54.899823 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:54.899901 | orchestrator |
2026-01-10 13:47:54.899914 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-10 13:47:54.938980 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:54.939039 | orchestrator |
2026-01-10 13:47:54.939045 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-10 13:47:54.998183 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:54.998283 | orchestrator |
2026-01-10 13:47:54.998297 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-01-10 13:47:55.027867 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:55.027925 | orchestrator |
2026-01-10 13:47:55.027934 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-01-10 13:47:55.071754 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:55.071806 | orchestrator |
2026-01-10 13:47:55.071814 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-01-10 13:47:55.113334 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:47:55.113402 | orchestrator |
2026-01-10 13:47:55.113411 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-01-10 13:47:55.862596 | orchestrator | changed: [testbed-manager]
2026-01-10 13:47:55.862658 | orchestrator |
2026-01-10 13:47:55.862665 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-01-10 13:50:37.879724 | orchestrator | changed: [testbed-manager]
2026-01-10 13:50:37.879968 | orchestrator |
2026-01-10 13:50:37.879993 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-01-10 13:51:55.560407 | orchestrator | changed: [testbed-manager]
2026-01-10 13:51:55.560480 | orchestrator |
2026-01-10 13:51:55.560499 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-10 13:52:19.230888 | orchestrator | changed: [testbed-manager]
2026-01-10 13:52:19.231135 | orchestrator |
2026-01-10 13:52:19.231148 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-10 13:52:28.501478 | orchestrator | changed: [testbed-manager]
2026-01-10 13:52:28.501795 | orchestrator |
2026-01-10 13:52:28.501822 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-01-10 13:52:28.551444 | orchestrator | ok: [testbed-manager]
2026-01-10 13:52:28.551557 | orchestrator |
2026-01-10 13:52:28.551574 | orchestrator | TASK [Get current user] ********************************************************
2026-01-10 13:52:29.366679 | orchestrator | ok: [testbed-manager]
2026-01-10 13:52:29.366732 | orchestrator |
2026-01-10 13:52:29.366744 | orchestrator | TASK [Create venv directory] ***************************************************
2026-01-10 13:52:30.089424 | orchestrator | changed: [testbed-manager]
2026-01-10 13:52:30.089469 | orchestrator |
2026-01-10 13:52:30.089477 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-01-10 13:52:36.618550 | orchestrator | changed: [testbed-manager]
2026-01-10 13:52:36.618602 | orchestrator |
2026-01-10 13:52:36.618627 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-01-10 13:52:42.990089 | orchestrator | changed:
[testbed-manager] 2026-01-10 13:52:42.990389 | orchestrator | 2026-01-10 13:52:42.990417 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-01-10 13:52:45.747112 | orchestrator | changed: [testbed-manager] 2026-01-10 13:52:45.747245 | orchestrator | 2026-01-10 13:52:45.747264 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-01-10 13:52:47.644022 | orchestrator | changed: [testbed-manager] 2026-01-10 13:52:47.644140 | orchestrator | 2026-01-10 13:52:47.644157 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-01-10 13:52:48.828866 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-10 13:52:48.828979 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-10 13:52:48.828988 | orchestrator | 2026-01-10 13:52:48.828996 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-10 13:52:48.872825 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-10 13:52:48.872945 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-10 13:52:48.872967 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-10 13:52:48.872986 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-10 13:52:52.216268 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-10 13:52:52.216393 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-10 13:52:52.216416 | orchestrator | 2026-01-10 13:52:52.216428 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-10 13:52:52.808670 | orchestrator | changed: [testbed-manager] 2026-01-10 13:52:52.808795 | orchestrator | 2026-01-10 13:52:52.808814 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-10 13:55:11.401444 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-10 13:55:11.401673 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-10 13:55:11.401700 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-10 13:55:11.401714 | orchestrator | 2026-01-10 13:55:11.401728 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-10 13:55:13.801702 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-01-10 13:55:13.801796 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-10 13:55:13.801811 | orchestrator | 2026-01-10 13:55:13.801824 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-10 13:55:13.801836 | orchestrator | 2026-01-10 13:55:13.801848 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-10 13:55:15.250254 | orchestrator | ok: [testbed-manager] 2026-01-10 13:55:15.250314 | orchestrator | 2026-01-10 13:55:15.250323 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-10 13:55:15.303757 | orchestrator | ok: [testbed-manager] 2026-01-10 13:55:15.303840 | 
orchestrator | 2026-01-10 13:55:15.303854 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-10 13:55:15.369067 | orchestrator | ok: [testbed-manager] 2026-01-10 13:55:15.369185 | orchestrator | 2026-01-10 13:55:15.369201 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-10 13:55:16.167258 | orchestrator | changed: [testbed-manager] 2026-01-10 13:55:16.167346 | orchestrator | 2026-01-10 13:55:16.167367 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-10 13:55:16.906710 | orchestrator | changed: [testbed-manager] 2026-01-10 13:55:16.906760 | orchestrator | 2026-01-10 13:55:16.906768 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-10 13:55:18.331739 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-10 13:55:18.331830 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-10 13:55:18.331847 | orchestrator | 2026-01-10 13:55:18.331880 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-01-10 13:55:19.738184 | orchestrator | changed: [testbed-manager] 2026-01-10 13:55:19.738302 | orchestrator | 2026-01-10 13:55:19.738319 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-10 13:55:21.522988 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-10 13:55:21.523064 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-10 13:55:21.523074 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-10 13:55:21.523082 | orchestrator | 2026-01-10 13:55:21.523091 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-10 13:55:21.573901 | orchestrator | skipping: 
[testbed-manager] 2026-01-10 13:55:21.573998 | orchestrator | 2026-01-10 13:55:21.574051 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-10 13:55:21.649173 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:55:21.650090 | orchestrator | 2026-01-10 13:55:21.650185 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-10 13:55:22.198553 | orchestrator | changed: [testbed-manager] 2026-01-10 13:55:22.198623 | orchestrator | 2026-01-10 13:55:22.198634 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-10 13:55:22.266813 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:55:22.266911 | orchestrator | 2026-01-10 13:55:22.266929 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-10 13:55:23.159900 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-10 13:55:23.159983 | orchestrator | changed: [testbed-manager] 2026-01-10 13:55:23.160000 | orchestrator | 2026-01-10 13:55:23.160013 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-10 13:55:23.198161 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:55:23.198358 | orchestrator | 2026-01-10 13:55:23.198375 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-10 13:55:23.228585 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:55:23.228638 | orchestrator | 2026-01-10 13:55:23.228648 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-10 13:55:23.257658 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:55:23.257724 | orchestrator | 2026-01-10 13:55:23.257742 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-10 13:55:23.324200 | 
orchestrator | skipping: [testbed-manager] 2026-01-10 13:55:23.324269 | orchestrator | 2026-01-10 13:55:23.324284 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-10 13:55:24.134896 | orchestrator | ok: [testbed-manager] 2026-01-10 13:55:24.134986 | orchestrator | 2026-01-10 13:55:24.135003 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-10 13:55:24.135016 | orchestrator | 2026-01-10 13:55:24.135027 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-10 13:55:25.579892 | orchestrator | ok: [testbed-manager] 2026-01-10 13:55:25.579960 | orchestrator | 2026-01-10 13:55:25.579974 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-10 13:55:26.573020 | orchestrator | changed: [testbed-manager] 2026-01-10 13:55:26.573114 | orchestrator | 2026-01-10 13:55:26.573165 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 13:55:26.573180 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-10 13:55:26.573192 | orchestrator | 2026-01-10 13:55:26.977034 | orchestrator | ok: Runtime: 0:07:37.647990 2026-01-10 13:55:26.995663 | 2026-01-10 13:55:26.995824 | TASK [Point out that the log in on the manager is now possible] 2026-01-10 13:55:27.045983 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-01-10 13:55:27.056473 | 2026-01-10 13:55:27.056687 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-10 13:55:27.094763 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-01-10 13:55:27.107599 | 2026-01-10 13:55:27.107861 | TASK [Run manager part 1 + 2] 2026-01-10 13:55:27.987448 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-10 13:55:28.048344 | orchestrator | 2026-01-10 13:55:28.048430 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-10 13:55:28.048444 | orchestrator | 2026-01-10 13:55:28.048470 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-10 13:55:31.127295 | orchestrator | ok: [testbed-manager] 2026-01-10 13:55:31.127488 | orchestrator | 2026-01-10 13:55:31.127540 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-10 13:55:31.172460 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:55:31.172543 | orchestrator | 2026-01-10 13:55:31.172563 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-10 13:55:31.213777 | orchestrator | ok: [testbed-manager] 2026-01-10 13:55:31.213867 | orchestrator | 2026-01-10 13:55:31.213894 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-10 13:55:31.260921 | orchestrator | ok: [testbed-manager] 2026-01-10 13:55:31.261004 | orchestrator | 2026-01-10 13:55:31.261022 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-10 13:55:31.331712 | orchestrator | ok: [testbed-manager] 2026-01-10 13:55:31.331800 | orchestrator | 2026-01-10 13:55:31.331820 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-10 13:55:31.394179 | orchestrator | ok: [testbed-manager] 2026-01-10 13:55:31.394262 | orchestrator | 2026-01-10 13:55:31.394280 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-10 13:55:31.435812 | 
orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-10 13:55:31.435905 | orchestrator | 2026-01-10 13:55:31.435924 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-10 13:55:32.197047 | orchestrator | ok: [testbed-manager] 2026-01-10 13:55:32.197244 | orchestrator | 2026-01-10 13:55:32.197264 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-10 13:55:32.250759 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:55:32.250817 | orchestrator | 2026-01-10 13:55:32.250823 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-10 13:55:33.675929 | orchestrator | changed: [testbed-manager] 2026-01-10 13:55:33.676037 | orchestrator | 2026-01-10 13:55:33.676068 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-10 13:55:34.271674 | orchestrator | ok: [testbed-manager] 2026-01-10 13:55:34.271771 | orchestrator | 2026-01-10 13:55:34.271789 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-10 13:55:35.475919 | orchestrator | changed: [testbed-manager] 2026-01-10 13:55:35.476018 | orchestrator | 2026-01-10 13:55:35.476038 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-10 13:55:50.671971 | orchestrator | changed: [testbed-manager] 2026-01-10 13:55:50.672045 | orchestrator | 2026-01-10 13:55:50.672061 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-10 13:55:51.361898 | orchestrator | ok: [testbed-manager] 2026-01-10 13:55:51.362006 | orchestrator | 2026-01-10 13:55:51.362072 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-01-10 13:55:51.413379 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:55:51.413483 | orchestrator | 2026-01-10 13:55:51.413500 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-10 13:55:52.415383 | orchestrator | changed: [testbed-manager] 2026-01-10 13:55:52.415477 | orchestrator | 2026-01-10 13:55:52.415495 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-10 13:55:53.419518 | orchestrator | changed: [testbed-manager] 2026-01-10 13:55:53.419570 | orchestrator | 2026-01-10 13:55:53.419578 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-10 13:55:54.008922 | orchestrator | changed: [testbed-manager] 2026-01-10 13:55:54.009016 | orchestrator | 2026-01-10 13:55:54.009033 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-10 13:55:54.053429 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-10 13:55:54.053550 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-10 13:55:54.053678 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-10 13:55:54.053707 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-10 13:55:56.048763 | orchestrator | changed: [testbed-manager] 2026-01-10 13:55:56.048866 | orchestrator | 2026-01-10 13:55:56.048884 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-10 13:56:05.064848 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-10 13:56:05.064951 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-10 13:56:05.064969 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-10 13:56:05.064982 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-10 13:56:05.065003 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-10 13:56:05.065014 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-10 13:56:05.065026 | orchestrator | 2026-01-10 13:56:05.065039 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-10 13:56:06.138791 | orchestrator | changed: [testbed-manager] 2026-01-10 13:56:06.138882 | orchestrator | 2026-01-10 13:56:06.138899 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-10 13:56:06.182553 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:56:06.182637 | orchestrator | 2026-01-10 13:56:06.182652 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-10 13:56:09.387552 | orchestrator | changed: [testbed-manager] 2026-01-10 13:56:09.387744 | orchestrator | 2026-01-10 13:56:09.387765 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-10 13:56:09.429744 | orchestrator | skipping: [testbed-manager] 2026-01-10 13:56:09.429786 | orchestrator | 2026-01-10 13:56:09.429795 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-10 13:57:50.673272 | orchestrator | changed: [testbed-manager] 2026-01-10 
13:57:50.673327 | orchestrator | 2026-01-10 13:57:50.673337 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-10 13:57:51.871729 | orchestrator | ok: [testbed-manager] 2026-01-10 13:57:51.871811 | orchestrator | 2026-01-10 13:57:51.871825 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 13:57:51.871839 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-10 13:57:51.871850 | orchestrator | 2026-01-10 13:57:52.263251 | orchestrator | ok: Runtime: 0:02:24.571211 2026-01-10 13:57:52.285384 | 2026-01-10 13:57:52.285700 | TASK [Reboot manager] 2026-01-10 13:57:53.870571 | orchestrator | ok: Runtime: 0:00:01.014772 2026-01-10 13:57:53.888578 | 2026-01-10 13:57:53.888777 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-10 13:58:10.335263 | orchestrator | ok 2026-01-10 13:58:10.346089 | 2026-01-10 13:58:10.346267 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-10 13:59:10.402189 | orchestrator | ok 2026-01-10 13:59:10.418573 | 2026-01-10 13:59:10.418909 | TASK [Deploy manager + bootstrap nodes] 2026-01-10 13:59:13.162590 | orchestrator | 2026-01-10 13:59:13.162885 | orchestrator | # DEPLOY MANAGER 2026-01-10 13:59:13.162924 | orchestrator | 2026-01-10 13:59:13.162950 | orchestrator | + set -e 2026-01-10 13:59:13.162974 | orchestrator | + echo 2026-01-10 13:59:13.162997 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-10 13:59:13.163025 | orchestrator | + echo 2026-01-10 13:59:13.163096 | orchestrator | + cat /opt/manager-vars.sh 2026-01-10 13:59:13.166686 | orchestrator | export NUMBER_OF_NODES=6 2026-01-10 13:59:13.166727 | orchestrator | 2026-01-10 13:59:13.166741 | orchestrator | export CEPH_VERSION=reef 2026-01-10 13:59:13.166755 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-10 13:59:13.166769 | orchestrator 
| export MANAGER_VERSION=latest 2026-01-10 13:59:13.166828 | orchestrator | export OPENSTACK_VERSION=2025.1 2026-01-10 13:59:13.166842 | orchestrator | 2026-01-10 13:59:13.166862 | orchestrator | export ARA=false 2026-01-10 13:59:13.166874 | orchestrator | export DEPLOY_MODE=manager 2026-01-10 13:59:13.166893 | orchestrator | export TEMPEST=false 2026-01-10 13:59:13.166905 | orchestrator | export IS_ZUUL=true 2026-01-10 13:59:13.166916 | orchestrator | 2026-01-10 13:59:13.166935 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.86 2026-01-10 13:59:13.166948 | orchestrator | export EXTERNAL_API=false 2026-01-10 13:59:13.166959 | orchestrator | 2026-01-10 13:59:13.166970 | orchestrator | export IMAGE_USER=ubuntu 2026-01-10 13:59:13.166987 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-10 13:59:13.166998 | orchestrator | 2026-01-10 13:59:13.167009 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-10 13:59:13.167029 | orchestrator | 2026-01-10 13:59:13.167041 | orchestrator | + echo 2026-01-10 13:59:13.167054 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-10 13:59:13.167925 | orchestrator | ++ export INTERACTIVE=false 2026-01-10 13:59:13.167946 | orchestrator | ++ INTERACTIVE=false 2026-01-10 13:59:13.167959 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-10 13:59:13.167972 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-10 13:59:13.168213 | orchestrator | + source /opt/manager-vars.sh 2026-01-10 13:59:13.168241 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-10 13:59:13.168254 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-10 13:59:13.168265 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-10 13:59:13.168281 | orchestrator | ++ CEPH_VERSION=reef 2026-01-10 13:59:13.168292 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-10 13:59:13.168365 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-10 13:59:13.168402 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-10 13:59:13.168421 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-01-10 13:59:13.168439 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-01-10 13:59:13.168471 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-01-10 13:59:13.168492 | orchestrator | ++ export ARA=false 2026-01-10 13:59:13.168510 | orchestrator | ++ ARA=false 2026-01-10 13:59:13.168633 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-10 13:59:13.168652 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-10 13:59:13.168698 | orchestrator | ++ export TEMPEST=false 2026-01-10 13:59:13.168735 | orchestrator | ++ TEMPEST=false 2026-01-10 13:59:13.168754 | orchestrator | ++ export IS_ZUUL=true 2026-01-10 13:59:13.168773 | orchestrator | ++ IS_ZUUL=true 2026-01-10 13:59:13.168792 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.86 2026-01-10 13:59:13.168812 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.86 2026-01-10 13:59:13.168837 | orchestrator | ++ export EXTERNAL_API=false 2026-01-10 13:59:13.168856 | orchestrator | ++ EXTERNAL_API=false 2026-01-10 13:59:13.168875 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-10 13:59:13.168894 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-10 13:59:13.168913 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-10 13:59:13.168931 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-10 13:59:13.168949 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-10 13:59:13.168967 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-10 13:59:13.168985 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-10 13:59:13.224085 | orchestrator | + docker version 2026-01-10 13:59:13.520229 | orchestrator | Client: Docker Engine - Community 2026-01-10 13:59:13.520461 | orchestrator | Version: 27.5.1 2026-01-10 13:59:13.520487 | orchestrator | API version: 1.47 2026-01-10 13:59:13.520503 | orchestrator | Go version: go1.22.11 2026-01-10 13:59:13.520516 | orchestrator | Git commit: 9f9e405 2026-01-10 13:59:13.520530 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-10 13:59:13.520546 | orchestrator | OS/Arch: linux/amd64 2026-01-10 13:59:13.520561 | orchestrator | Context: default 2026-01-10 13:59:13.520575 | orchestrator | 2026-01-10 13:59:13.520590 | orchestrator | Server: Docker Engine - Community 2026-01-10 13:59:13.520606 | orchestrator | Engine: 2026-01-10 13:59:13.520638 | orchestrator | Version: 27.5.1 2026-01-10 13:59:13.520654 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-10 13:59:13.520700 | orchestrator | Go version: go1.22.11 2026-01-10 13:59:13.520717 | orchestrator | Git commit: 4c9b3b0 2026-01-10 13:59:13.520730 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-10 13:59:13.520745 | orchestrator | OS/Arch: linux/amd64 2026-01-10 13:59:13.520759 | orchestrator | Experimental: false 2026-01-10 13:59:13.520773 | orchestrator | containerd: 2026-01-10 13:59:13.520787 | orchestrator | Version: v2.2.1 2026-01-10 13:59:13.520802 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-10 13:59:13.520815 | orchestrator | runc: 2026-01-10 13:59:13.520829 | orchestrator | Version: 1.3.4 2026-01-10 13:59:13.520843 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-10 13:59:13.520857 | orchestrator | docker-init: 2026-01-10 13:59:13.520871 | orchestrator | Version: 0.19.0 2026-01-10 13:59:13.520885 | orchestrator | GitCommit: de40ad0 2026-01-10 13:59:13.522668 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-10 13:59:13.530807 | orchestrator | + set -e 2026-01-10 13:59:13.530838 | orchestrator | + source /opt/manager-vars.sh 2026-01-10 13:59:13.530851 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-10 13:59:13.530861 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-10 13:59:13.530871 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-10 13:59:13.530881 | orchestrator | ++ CEPH_VERSION=reef 2026-01-10 13:59:13.530892 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-10 
13:59:13.530904 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-10 13:59:13.530914 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-10 13:59:13.530924 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-10 13:59:13.530935 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-01-10 13:59:13.530945 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-01-10 13:59:13.530955 | orchestrator | ++ export ARA=false 2026-01-10 13:59:13.530966 | orchestrator | ++ ARA=false 2026-01-10 13:59:13.530976 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-10 13:59:13.530986 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-10 13:59:13.530995 | orchestrator | ++ export TEMPEST=false 2026-01-10 13:59:13.531004 | orchestrator | ++ TEMPEST=false 2026-01-10 13:59:13.531014 | orchestrator | ++ export IS_ZUUL=true 2026-01-10 13:59:13.531025 | orchestrator | ++ IS_ZUUL=true 2026-01-10 13:59:13.531036 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.86 2026-01-10 13:59:13.531045 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.86 2026-01-10 13:59:13.531056 | orchestrator | ++ export EXTERNAL_API=false 2026-01-10 13:59:13.531066 | orchestrator | ++ EXTERNAL_API=false 2026-01-10 13:59:13.531076 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-10 13:59:13.531087 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-10 13:59:13.531098 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-10 13:59:13.531108 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-10 13:59:13.531118 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-10 13:59:13.531128 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-10 13:59:13.531138 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-10 13:59:13.531149 | orchestrator | ++ export INTERACTIVE=false 2026-01-10 13:59:13.531160 | orchestrator | ++ INTERACTIVE=false 2026-01-10 13:59:13.531171 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-10 13:59:13.531186 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2026-01-10 13:59:13.531203 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-10 13:59:13.531214 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-10 13:59:13.531224 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-01-10 13:59:13.535528 | orchestrator | + set -e 2026-01-10 13:59:13.535546 | orchestrator | + VERSION=reef 2026-01-10 13:59:13.536409 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-10 13:59:13.542435 | orchestrator | + [[ -n ceph_version: reef ]] 2026-01-10 13:59:13.542452 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-01-10 13:59:13.546870 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2025.1 2026-01-10 13:59:13.555176 | orchestrator | + set -e 2026-01-10 13:59:13.555207 | orchestrator | + VERSION=2025.1 2026-01-10 13:59:13.555292 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-10 13:59:13.558049 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-01-10 13:59:13.558065 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2025.1/g' /opt/configuration/environments/manager/configuration.yml 2026-01-10 13:59:13.563926 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-01-10 13:59:13.564297 | orchestrator | ++ semver latest 7.0.0 2026-01-10 13:59:13.626061 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-10 13:59:13.626129 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-10 13:59:13.626138 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-01-10 13:59:13.627067 | orchestrator | ++ semver latest 10.0.0-0 2026-01-10 13:59:13.692667 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-10 13:59:13.693531 | orchestrator | ++ semver 2025.1 2025.1 2026-01-10 13:59:13.782824 | orchestrator | + [[ 0 -ge 0 ]] 2026-01-10 13:59:13.782921 | orchestrator | + 
sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-01-10 13:59:13.788751 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-01-10 13:59:13.794102 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-01-10 13:59:13.892726 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-10 13:59:13.894482 | orchestrator | + source /opt/venv/bin/activate
2026-01-10 13:59:13.895671 | orchestrator | ++ deactivate nondestructive
2026-01-10 13:59:13.895743 | orchestrator | ++ '[' -n '' ']'
2026-01-10 13:59:13.895757 | orchestrator | ++ '[' -n '' ']'
2026-01-10 13:59:13.895770 | orchestrator | ++ hash -r
2026-01-10 13:59:13.895781 | orchestrator | ++ '[' -n '' ']'
2026-01-10 13:59:13.895793 | orchestrator | ++ unset VIRTUAL_ENV
2026-01-10 13:59:13.895811 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-01-10 13:59:13.895827 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-01-10 13:59:13.896091 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-01-10 13:59:13.896109 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-01-10 13:59:13.896121 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-01-10 13:59:13.896132 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-01-10 13:59:13.896341 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-10 13:59:13.896387 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-10 13:59:13.896426 | orchestrator | ++ export PATH
2026-01-10 13:59:13.896538 | orchestrator | ++ '[' -n '' ']'
2026-01-10 13:59:13.896648 | orchestrator | ++ '[' -z '' ']'
2026-01-10 13:59:13.896663 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-01-10 13:59:13.896730 | orchestrator | ++ PS1='(venv) '
2026-01-10 13:59:13.896745 | orchestrator | ++ export PS1
2026-01-10 13:59:13.896782 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-01-10 13:59:13.896799 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-01-10 13:59:13.896876 | orchestrator | ++ hash -r
2026-01-10 13:59:13.897237 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-01-10 13:59:15.232701 | orchestrator |
2026-01-10 13:59:15.232819 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-01-10 13:59:15.232837 | orchestrator |
2026-01-10 13:59:15.232850 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-10 13:59:15.822444 | orchestrator | ok: [testbed-manager]
2026-01-10 13:59:15.822536 | orchestrator |
2026-01-10 13:59:15.822548 | orchestrator | TASK [Copy fact files] *********************************************************
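The `set-ceph-version.sh` and `set-openstack-version.sh` traces above follow the same pattern: check with grep that a `key: value` line exists, then rewrite it in place with `sed -i`. A minimal self-contained sketch of that pattern, using a temp file in place of the real `/opt/configuration/environments/manager/configuration.yml` (file contents here are illustrative, not the testbed's actual configuration):

```shell
#!/usr/bin/env bash
# Sketch of the set-*-version.sh pattern traced in the log:
# rewrite a "key: value" line in a YAML file only when the key is present.
set -e

CONFIG=$(mktemp)
echo 'ceph_version: quincy' > "$CONFIG"

VERSION=reef
# grep prints the matching line; the rewrite only fires when the key exists
if [[ -n "$(grep '^ceph_version:' "$CONFIG")" ]]; then
    sed -i "s/ceph_version: .*/ceph_version: ${VERSION}/g" "$CONFIG"
fi

cat "$CONFIG"
```

Guarding the `sed` with the `grep` check keeps the script from silently doing nothing meaningful when the key is absent; the real scripts could extend this to append the key instead.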
2026-01-10 13:59:16.854929 | orchestrator | changed: [testbed-manager]
2026-01-10 13:59:16.855052 | orchestrator |
2026-01-10 13:59:16.855074 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-01-10 13:59:16.855091 | orchestrator |
2026-01-10 13:59:16.855111 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-10 13:59:20.355094 | orchestrator | ok: [testbed-manager]
2026-01-10 13:59:20.355212 | orchestrator |
2026-01-10 13:59:20.355229 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-01-10 13:59:20.407706 | orchestrator | ok: [testbed-manager]
2026-01-10 13:59:20.407809 | orchestrator |
2026-01-10 13:59:20.407825 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-01-10 13:59:20.895161 | orchestrator | changed: [testbed-manager]
2026-01-10 13:59:20.895269 | orchestrator |
2026-01-10 13:59:20.895287 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-01-10 13:59:20.936667 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:59:20.936769 | orchestrator |
2026-01-10 13:59:20.936784 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-01-10 13:59:21.311533 | orchestrator | changed: [testbed-manager]
2026-01-10 13:59:21.311669 | orchestrator |
2026-01-10 13:59:21.311687 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2026-01-10 13:59:21.381216 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:59:21.381444 | orchestrator |
2026-01-10 13:59:21.381465 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-01-10 13:59:21.721418 | orchestrator | ok: [testbed-manager]
2026-01-10 13:59:21.721521 | orchestrator |
2026-01-10 13:59:21.721539 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-01-10 13:59:21.841172 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:59:21.841272 | orchestrator |
2026-01-10 13:59:21.841287 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-01-10 13:59:21.841300 | orchestrator |
2026-01-10 13:59:21.841366 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-10 13:59:23.675167 | orchestrator | ok: [testbed-manager]
2026-01-10 13:59:23.675276 | orchestrator |
2026-01-10 13:59:23.675292 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-01-10 13:59:23.805239 | orchestrator | included: osism.services.traefik for testbed-manager
2026-01-10 13:59:23.805394 | orchestrator |
2026-01-10 13:59:23.805414 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-01-10 13:59:23.879652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-01-10 13:59:23.879752 | orchestrator |
2026-01-10 13:59:23.879768 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-01-10 13:59:25.016804 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-01-10 13:59:25.016921 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-01-10 13:59:25.016939 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-01-10 13:59:25.016952 | orchestrator |
2026-01-10 13:59:25.016965 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-01-10 13:59:26.928177 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-01-10 13:59:26.928285 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-01-10 13:59:26.928301 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-01-10 13:59:26.928341 | orchestrator |
2026-01-10 13:59:26.928356 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-01-10 13:59:27.579809 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-10 13:59:27.579930 | orchestrator | changed: [testbed-manager]
2026-01-10 13:59:27.579949 | orchestrator |
2026-01-10 13:59:27.579961 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-01-10 13:59:28.232611 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-10 13:59:28.232717 | orchestrator | changed: [testbed-manager]
2026-01-10 13:59:28.232735 | orchestrator |
2026-01-10 13:59:28.232748 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-01-10 13:59:28.282510 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:59:28.282599 | orchestrator |
2026-01-10 13:59:28.282615 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-01-10 13:59:28.649618 | orchestrator | ok: [testbed-manager]
2026-01-10 13:59:28.649718 | orchestrator |
2026-01-10 13:59:28.649734 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-01-10 13:59:28.742608 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-01-10 13:59:28.742731 | orchestrator |
2026-01-10 13:59:28.742756 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-01-10 13:59:29.840365 | orchestrator | changed: [testbed-manager]
2026-01-10 13:59:29.840479 | orchestrator |
2026-01-10 13:59:29.840499 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-01-10 13:59:30.714288 | orchestrator | changed: [testbed-manager]
2026-01-10 13:59:30.714487 | orchestrator |
2026-01-10 13:59:30.714520 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-01-10 13:59:44.954163 | orchestrator | changed: [testbed-manager]
2026-01-10 13:59:44.954282 | orchestrator |
2026-01-10 13:59:44.954300 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-01-10 13:59:45.008154 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:59:45.008256 | orchestrator |
2026-01-10 13:59:45.008272 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-01-10 13:59:45.008285 | orchestrator |
2026-01-10 13:59:45.008296 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-10 13:59:46.911237 | orchestrator | ok: [testbed-manager]
2026-01-10 13:59:46.911436 | orchestrator |
2026-01-10 13:59:46.911467 | orchestrator | TASK [Apply manager role] ******************************************************
2026-01-10 13:59:47.034319 | orchestrator | included: osism.services.manager for testbed-manager
2026-01-10 13:59:47.034510 | orchestrator |
2026-01-10 13:59:47.034526 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-01-10 13:59:47.093578 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-01-10 13:59:47.093700 | orchestrator |
2026-01-10 13:59:47.093718 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-01-10 13:59:49.824624 | orchestrator | ok: [testbed-manager]
2026-01-10 13:59:49.824766 | orchestrator |
2026-01-10 13:59:49.824786 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-01-10 13:59:49.878743 | orchestrator | ok: [testbed-manager]
2026-01-10 13:59:49.878846 | orchestrator |
2026-01-10 13:59:49.878860 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-01-10 13:59:50.006473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-01-10 13:59:50.006600 | orchestrator |
2026-01-10 13:59:50.006616 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-01-10 13:59:52.929540 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-01-10 13:59:52.929690 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-01-10 13:59:52.929705 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-01-10 13:59:52.929717 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-01-10 13:59:52.929728 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-01-10 13:59:52.929738 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-01-10 13:59:52.929748 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-01-10 13:59:52.929759 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-01-10 13:59:52.929769 | orchestrator |
2026-01-10 13:59:52.929782 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-01-10 13:59:53.592363 | orchestrator | changed: [testbed-manager]
2026-01-10 13:59:53.592487 | orchestrator |
2026-01-10 13:59:53.592581 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-01-10 13:59:54.264471 | orchestrator | changed: [testbed-manager]
2026-01-10 13:59:54.264573 | orchestrator |
2026-01-10 13:59:54.264591 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-01-10 13:59:54.353068 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-01-10 13:59:54.353167 | orchestrator |
2026-01-10 13:59:54.353183 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-01-10 13:59:55.619288 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-01-10 13:59:55.619457 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-01-10 13:59:55.619476 | orchestrator |
2026-01-10 13:59:55.619490 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-01-10 13:59:56.285434 | orchestrator | changed: [testbed-manager]
2026-01-10 13:59:56.285546 | orchestrator |
2026-01-10 13:59:56.285566 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-01-10 13:59:56.334252 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:59:56.334365 | orchestrator |
2026-01-10 13:59:56.334381 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-01-10 13:59:56.399715 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-01-10 13:59:56.399884 | orchestrator |
2026-01-10 13:59:56.399903 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-01-10 13:59:57.045411 | orchestrator | changed: [testbed-manager]
2026-01-10 13:59:57.045543 | orchestrator |
2026-01-10 13:59:57.045560 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-01-10 13:59:57.102947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-01-10 13:59:57.103080 | orchestrator |
2026-01-10 13:59:57.103106 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-01-10 13:59:58.497640 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-10 13:59:58.497759 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-10 13:59:58.497769 | orchestrator | changed: [testbed-manager]
2026-01-10 13:59:58.497779 | orchestrator |
2026-01-10 13:59:58.497787 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-01-10 13:59:59.160412 | orchestrator | changed: [testbed-manager]
2026-01-10 13:59:59.160540 | orchestrator |
2026-01-10 13:59:59.160557 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-01-10 13:59:59.218603 | orchestrator | skipping: [testbed-manager]
2026-01-10 13:59:59.218708 | orchestrator |
2026-01-10 13:59:59.218742 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-01-10 13:59:59.304804 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-01-10 13:59:59.304911 | orchestrator |
2026-01-10 13:59:59.304922 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-01-10 13:59:59.849976 | orchestrator | changed: [testbed-manager]
2026-01-10 13:59:59.850134 | orchestrator |
2026-01-10 13:59:59.850147 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-01-10 14:00:00.285698 | orchestrator | changed: [testbed-manager]
2026-01-10 14:00:00.285829 | orchestrator |
2026-01-10 14:00:00.285848 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-01-10 14:00:01.582233 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-01-10 14:00:01.582428 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-01-10 14:00:01.582445 | orchestrator |
2026-01-10 14:00:01.582460 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-01-10 14:00:02.243060 | orchestrator | changed: [testbed-manager]
2026-01-10 14:00:02.243188 | orchestrator |
2026-01-10 14:00:02.243203 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-01-10 14:00:02.644680 | orchestrator | ok: [testbed-manager]
2026-01-10 14:00:02.644852 | orchestrator |
2026-01-10 14:00:02.644869 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-01-10 14:00:03.049565 | orchestrator | changed: [testbed-manager]
2026-01-10 14:00:03.049702 | orchestrator |
2026-01-10 14:00:03.049720 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-01-10 14:00:03.100059 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:00:03.100169 | orchestrator |
2026-01-10 14:00:03.100181 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-01-10 14:00:03.175785 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-01-10 14:00:03.175911 | orchestrator |
2026-01-10 14:00:03.175927 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-01-10 14:00:03.230670 | orchestrator | ok: [testbed-manager]
2026-01-10 14:00:03.230794 | orchestrator |
2026-01-10 14:00:03.230805 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-01-10 14:00:05.279572 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-01-10 14:00:05.279720 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-01-10 14:00:05.279738 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-01-10 14:00:05.279750 | orchestrator |
2026-01-10 14:00:05.279767 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-01-10 14:00:06.011617 | orchestrator | changed: [testbed-manager]
2026-01-10 14:00:06.011791 | orchestrator |
2026-01-10 14:00:06.011810 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-01-10 14:00:06.737585 | orchestrator | changed: [testbed-manager]
2026-01-10 14:00:06.737717 | orchestrator |
2026-01-10 14:00:06.737735 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-01-10 14:00:07.523918 | orchestrator | changed: [testbed-manager]
2026-01-10 14:00:07.524009 | orchestrator |
2026-01-10 14:00:07.524023 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-01-10 14:00:07.602649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-01-10 14:00:07.602748 | orchestrator |
2026-01-10 14:00:07.602765 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-01-10 14:00:07.648670 | orchestrator | ok: [testbed-manager]
2026-01-10 14:00:07.648769 | orchestrator |
2026-01-10 14:00:07.648785 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-01-10 14:00:08.398112 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-01-10 14:00:08.398216 | orchestrator |
2026-01-10 14:00:08.398233 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-01-10 14:00:08.491584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-01-10 14:00:08.491676 | orchestrator |
2026-01-10 14:00:08.491691 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-01-10 14:00:09.193626 | orchestrator | changed: [testbed-manager]
2026-01-10 14:00:09.193718 | orchestrator |
2026-01-10 14:00:09.193734 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-01-10 14:00:09.805240 | orchestrator | ok: [testbed-manager]
2026-01-10 14:00:09.805329 | orchestrator |
2026-01-10 14:00:09.805398 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-01-10 14:00:09.860596 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:00:09.860672 | orchestrator |
2026-01-10 14:00:09.860686 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-01-10 14:00:09.915145 | orchestrator | ok: [testbed-manager]
2026-01-10 14:00:09.915247 | orchestrator |
2026-01-10 14:00:09.915262 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-01-10 14:00:10.760776 | orchestrator | changed: [testbed-manager]
2026-01-10 14:00:10.760889 | orchestrator |
2026-01-10 14:00:10.760912 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-01-10 14:01:18.366469 | orchestrator | changed: [testbed-manager]
2026-01-10 14:01:18.366587 | orchestrator |
2026-01-10 14:01:18.366604 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-01-10 14:01:19.371041 | orchestrator | ok: [testbed-manager]
2026-01-10 14:01:19.371157 | orchestrator |
2026-01-10 14:01:19.371197 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-01-10 14:01:19.431792 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:01:19.431848 | orchestrator |
2026-01-10 14:01:19.431863 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-01-10 14:01:21.915608 | orchestrator | changed: [testbed-manager]
2026-01-10 14:01:21.915714 | orchestrator |
2026-01-10 14:01:21.915732 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-01-10 14:01:21.981207 | orchestrator | ok: [testbed-manager]
2026-01-10 14:01:21.981312 | orchestrator |
2026-01-10 14:01:21.981332 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-01-10 14:01:21.981345 | orchestrator |
2026-01-10 14:01:21.981357 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-01-10 14:01:22.035556 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:01:22.035655 | orchestrator |
2026-01-10 14:01:22.035671 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-01-10 14:02:22.093720 | orchestrator | Pausing for 60 seconds
2026-01-10 14:02:22.093845 | orchestrator | changed: [testbed-manager]
2026-01-10 14:02:22.093863 | orchestrator |
2026-01-10 14:02:22.093878 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-01-10 14:02:25.193710 | orchestrator | changed: [testbed-manager]
2026-01-10 14:02:25.193850 | orchestrator |
2026-01-10 14:02:25.193869 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-01-10 14:03:27.211440 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-01-10 14:03:27.211692 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-01-10 14:03:27.211720 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-01-10 14:03:27.211741 | orchestrator | changed: [testbed-manager]
2026-01-10 14:03:27.211763 | orchestrator |
2026-01-10 14:03:27.211782 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-01-10 14:03:38.143972 | orchestrator | changed: [testbed-manager]
2026-01-10 14:03:38.144141 | orchestrator |
2026-01-10 14:03:38.144161 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-01-10 14:03:38.221097 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-01-10 14:03:38.221221 | orchestrator |
2026-01-10 14:03:38.221240 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-01-10 14:03:38.221254 | orchestrator |
2026-01-10 14:03:38.221266 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-01-10 14:03:38.285817 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:03:38.285888 | orchestrator |
2026-01-10 14:03:38.285903 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-01-10 14:03:38.365247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-01-10 14:03:38.365339 | orchestrator |
2026-01-10 14:03:38.365355 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-01-10 14:03:39.156280 | orchestrator | changed: [testbed-manager]
2026-01-10 14:03:39.156408 | orchestrator |
2026-01-10 14:03:39.156428 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-01-10 14:03:42.539195 | orchestrator | ok: [testbed-manager]
2026-01-10 14:03:42.539325 | orchestrator |
2026-01-10 14:03:42.539342 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-01-10 14:03:42.617433 | orchestrator | ok: [testbed-manager] => {
2026-01-10 14:03:42.617589 | orchestrator | "version_check_result.stdout_lines": [
2026-01-10 14:03:42.617605 | orchestrator | "=== OSISM Container Version Check ===",
2026-01-10 14:03:42.617616 | orchestrator | "Checking running containers against expected versions...",
2026-01-10 14:03:42.617629 | orchestrator | "",
2026-01-10 14:03:42.617641 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-01-10 14:03:42.617651 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest",
2026-01-10 14:03:42.617661 | orchestrator | " Enabled: true",
2026-01-10 14:03:42.617671 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest",
2026-01-10 14:03:42.617682 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:03:42.617692 | orchestrator | "",
2026-01-10 14:03:42.617702 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-01-10 14:03:42.617712 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest",
2026-01-10 14:03:42.617723 | orchestrator | " Enabled: true",
2026-01-10 14:03:42.617732 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest",
2026-01-10 14:03:42.617742 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:03:42.617752 | orchestrator | "",
2026-01-10 14:03:42.617762 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-01-10 14:03:42.617772 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest",
2026-01-10 14:03:42.617783 | orchestrator | " Enabled: true",
2026-01-10 14:03:42.617793 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest",
2026-01-10 14:03:42.617803 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:03:42.617813 | orchestrator | "",
2026-01-10 14:03:42.617823 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-01-10 14:03:42.617858 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef",
2026-01-10 14:03:42.617869 | orchestrator | " Enabled: true",
2026-01-10 14:03:42.617879 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef",
2026-01-10 14:03:42.617889 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:03:42.617898 | orchestrator | "",
2026-01-10 14:03:42.617908 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-01-10 14:03:42.617918 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2025.1",
2026-01-10 14:03:42.617927 | orchestrator | " Enabled: true",
2026-01-10 14:03:42.617937 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2025.1",
2026-01-10 14:03:42.617947 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:03:42.617956 | orchestrator | "",
2026-01-10 14:03:42.617966 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-01-10 14:03:42.617976 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-10 14:03:42.617986 | orchestrator | " Enabled: true",
2026-01-10 14:03:42.617996 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-10 14:03:42.618006 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:03:42.618168 | orchestrator | "",
2026-01-10 14:03:42.618182 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-01-10 14:03:42.618192 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-10 14:03:42.618202 | orchestrator | " Enabled: true",
2026-01-10 14:03:42.618223 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-10 14:03:42.618234 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:03:42.618243 | orchestrator | "",
2026-01-10 14:03:42.618253 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-01-10 14:03:42.618263 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-10 14:03:42.618280 | orchestrator | " Enabled: true",
2026-01-10 14:03:42.618290 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-10 14:03:42.618300 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:03:42.618310 | orchestrator | "",
2026-01-10 14:03:42.618320 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-01-10 14:03:42.618330 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2026-01-10 14:03:42.618340 | orchestrator | " Enabled: true",
2026-01-10 14:03:42.618349 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2026-01-10 14:03:42.618359 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:03:42.618369 | orchestrator | "",
2026-01-10 14:03:42.618379 | orchestrator | "Checking service: redis (Redis Cache)",
2026-01-10 14:03:42.618389 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-10 14:03:42.618398 | orchestrator | " Enabled: true",
2026-01-10 14:03:42.618408 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-10 14:03:42.618418 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:03:42.618428 | orchestrator | "",
2026-01-10 14:03:42.618438 | orchestrator | "Checking service: api (OSISM API Service)",
2026-01-10 14:03:42.618447 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-10 14:03:42.618457 | orchestrator | " Enabled: true",
2026-01-10 14:03:42.618467 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-10 14:03:42.618499 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:03:42.618509 | orchestrator | "",
2026-01-10 14:03:42.618519 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-01-10 14:03:42.618529 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-10 14:03:42.618538 | orchestrator | " Enabled: true",
2026-01-10 14:03:42.618548 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-10 14:03:42.618558 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:03:42.618568 | orchestrator | "",
2026-01-10 14:03:42.618577 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-01-10 14:03:42.618587 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-10 14:03:42.618606 | orchestrator | " Enabled: true",
2026-01-10 14:03:42.618616 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-10 14:03:42.618626 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:03:42.618635 | orchestrator | "",
2026-01-10 14:03:42.618645 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-01-10 14:03:42.618655 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-10 14:03:42.618664 | orchestrator | " Enabled: true",
2026-01-10 14:03:42.618674 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-10 14:03:42.618684 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:03:42.618693 | orchestrator | "",
2026-01-10 14:03:42.618703 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-01-10 14:03:42.618735 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-10 14:03:42.618745 | orchestrator | " Enabled: true",
2026-01-10 14:03:42.618755 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-10 14:03:42.618765 | orchestrator | " Status: ✅ MATCH",
2026-01-10 14:03:42.618775 | orchestrator | "",
2026-01-10 14:03:42.618785 | orchestrator | "=== Summary ===",
2026-01-10 14:03:42.618794 | orchestrator | "Errors (version mismatches): 0",
2026-01-10 14:03:42.618804 | orchestrator | "Warnings (expected containers not running): 0",
2026-01-10 14:03:42.618814 | orchestrator | "",
2026-01-10 14:03:42.618824 | orchestrator | "✅ All running containers match expected versions!"
2026-01-10 14:03:42.618834 | orchestrator | ]
2026-01-10 14:03:42.618844 | orchestrator | }
2026-01-10 14:03:42.618855 | orchestrator |
2026-01-10 14:03:42.618866 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-01-10 14:03:42.678352 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:03:42.678510 | orchestrator |
2026-01-10 14:03:42.678526 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:03:42.678540 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2026-01-10 14:03:42.678551 | orchestrator |
2026-01-10 14:03:42.781207 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-10 14:03:42.781356 | orchestrator | + deactivate
2026-01-10 14:03:42.781385 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-01-10 14:03:42.781400 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-10 14:03:42.781411 | orchestrator | + export PATH
2026-01-10 14:03:42.781423 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-01-10 14:03:42.781435 | orchestrator | + '[' -n '' ']'
2026-01-10 14:03:42.781446 | orchestrator | + hash -r
2026-01-10 14:03:42.781457 | orchestrator | + '[' -n '' ']'
2026-01-10 14:03:42.781524 | orchestrator | + unset VIRTUAL_ENV
2026-01-10 14:03:42.781538 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-01-10 14:03:42.781549 | orchestrator | + '[' '!'
'' = nondestructive ']' 2026-01-10 14:03:42.781561 | orchestrator | + unset -f deactivate 2026-01-10 14:03:42.781573 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-01-10 14:03:42.787719 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-10 14:03:42.787749 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-10 14:03:42.787761 | orchestrator | + local max_attempts=60 2026-01-10 14:03:42.787773 | orchestrator | + local name=ceph-ansible 2026-01-10 14:03:42.787784 | orchestrator | + local attempt_num=1 2026-01-10 14:03:42.789088 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-10 14:03:42.820256 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-10 14:03:42.820338 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-10 14:03:42.820354 | orchestrator | + local max_attempts=60 2026-01-10 14:03:42.820367 | orchestrator | + local name=kolla-ansible 2026-01-10 14:03:42.820379 | orchestrator | + local attempt_num=1 2026-01-10 14:03:42.821088 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-10 14:03:42.863401 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-10 14:03:42.863536 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-10 14:03:42.863553 | orchestrator | + local max_attempts=60 2026-01-10 14:03:42.863565 | orchestrator | + local name=osism-ansible 2026-01-10 14:03:42.863577 | orchestrator | + local attempt_num=1 2026-01-10 14:03:42.864834 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-10 14:03:42.903419 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-10 14:03:42.903562 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-10 14:03:42.903578 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-10 14:03:43.665023 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-01-10 14:03:43.851096 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-01-10 14:03:43.851232 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-01-10 14:03:43.851247 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-01-10 14:03:43.851260 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-01-10 14:03:43.851276 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-01-10 14:03:43.851288 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-01-10 14:03:43.851329 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-01-10 14:03:43.851341 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-01-10 14:03:43.851353 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-01-10 14:03:43.851364 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-01-10 14:03:43.851375 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-01-10 14:03:43.851387 | orchestrator | 
manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-01-10 14:03:43.851398 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-01-10 14:03:43.851409 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-01-10 14:03:43.851421 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-01-10 14:03:43.851432 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-01-10 14:03:43.858117 | orchestrator | ++ semver latest 7.0.0 2026-01-10 14:03:43.914455 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-10 14:03:43.914596 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-10 14:03:43.914646 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-01-10 14:03:43.916862 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-01-10 14:03:56.363194 | orchestrator | 2026-01-10 14:03:56 | INFO  | Task da888c6b-abb1-4031-bc33-5c4ed94e8cde (resolvconf) was prepared for execution. 2026-01-10 14:03:56.363305 | orchestrator | 2026-01-10 14:03:56 | INFO  | It takes a moment until task da888c6b-abb1-4031-bc33-5c4ed94e8cde (resolvconf) has been started and output is visible here. 
2026-01-10 14:04:10.294348 | orchestrator |
2026-01-10 14:04:10.294467 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-01-10 14:04:10.294547 | orchestrator |
2026-01-10 14:04:10.294562 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-10 14:04:10.294574 | orchestrator | Saturday 10 January 2026 14:04:00 +0000 (0:00:00.130) 0:00:00.130 ******
2026-01-10 14:04:10.294585 | orchestrator | ok: [testbed-manager]
2026-01-10 14:04:10.294598 | orchestrator |
2026-01-10 14:04:10.294610 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-01-10 14:04:10.294622 | orchestrator | Saturday 10 January 2026 14:04:04 +0000 (0:00:03.520) 0:00:03.650 ******
2026-01-10 14:04:10.294634 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:04:10.294646 | orchestrator |
2026-01-10 14:04:10.294657 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-01-10 14:04:10.294668 | orchestrator | Saturday 10 January 2026 14:04:04 +0000 (0:00:00.072) 0:00:03.723 ******
2026-01-10 14:04:10.294680 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-01-10 14:04:10.294692 | orchestrator |
2026-01-10 14:04:10.294714 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-01-10 14:04:10.294726 | orchestrator | Saturday 10 January 2026 14:04:04 +0000 (0:00:00.102) 0:00:03.825 ******
2026-01-10 14:04:10.294737 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-01-10 14:04:10.294749 | orchestrator |
2026-01-10 14:04:10.294760 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-01-10 14:04:10.294772 | orchestrator | Saturday 10 January 2026 14:04:04 +0000 (0:00:00.081) 0:00:03.907 ******
2026-01-10 14:04:10.294783 | orchestrator | ok: [testbed-manager]
2026-01-10 14:04:10.294795 | orchestrator |
2026-01-10 14:04:10.294806 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-01-10 14:04:10.294817 | orchestrator | Saturday 10 January 2026 14:04:05 +0000 (0:00:01.146) 0:00:05.054 ******
2026-01-10 14:04:10.294828 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:04:10.294839 | orchestrator |
2026-01-10 14:04:10.294850 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-01-10 14:04:10.294862 | orchestrator | Saturday 10 January 2026 14:04:05 +0000 (0:00:00.064) 0:00:05.119 ******
2026-01-10 14:04:10.294875 | orchestrator | ok: [testbed-manager]
2026-01-10 14:04:10.294888 | orchestrator |
2026-01-10 14:04:10.294900 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-01-10 14:04:10.294913 | orchestrator | Saturday 10 January 2026 14:04:06 +0000 (0:00:00.499) 0:00:05.619 ******
2026-01-10 14:04:10.294926 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:04:10.294938 | orchestrator |
2026-01-10 14:04:10.294951 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-01-10 14:04:10.294966 | orchestrator | Saturday 10 January 2026 14:04:06 +0000 (0:00:00.081) 0:00:05.701 ******
2026-01-10 14:04:10.294979 | orchestrator | changed: [testbed-manager]
2026-01-10 14:04:10.294991 | orchestrator |
2026-01-10 14:04:10.295003 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-01-10 14:04:10.295016 | orchestrator | Saturday 10 January 2026 14:04:06 +0000 (0:00:00.558) 0:00:06.259 ******
2026-01-10 14:04:10.295028 | orchestrator | changed: [testbed-manager]
2026-01-10 14:04:10.295059 | orchestrator |
2026-01-10 14:04:10.295072 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-01-10 14:04:10.295084 | orchestrator | Saturday 10 January 2026 14:04:07 +0000 (0:00:01.141) 0:00:07.401 ******
2026-01-10 14:04:10.295097 | orchestrator | ok: [testbed-manager]
2026-01-10 14:04:10.295114 | orchestrator |
2026-01-10 14:04:10.295132 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-01-10 14:04:10.295145 | orchestrator | Saturday 10 January 2026 14:04:08 +0000 (0:00:01.004) 0:00:08.405 ******
2026-01-10 14:04:10.295159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-01-10 14:04:10.295172 | orchestrator |
2026-01-10 14:04:10.295252 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-01-10 14:04:10.295266 | orchestrator | Saturday 10 January 2026 14:04:08 +0000 (0:00:00.080) 0:00:08.486 ******
2026-01-10 14:04:10.295279 | orchestrator | changed: [testbed-manager]
2026-01-10 14:04:10.295290 | orchestrator |
2026-01-10 14:04:10.295301 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:04:10.295314 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-10 14:04:10.295326 | orchestrator |
2026-01-10 14:04:10.295337 | orchestrator |
2026-01-10 14:04:10.295349 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:04:10.295361 | orchestrator | Saturday 10 January 2026 14:04:10 +0000 (0:00:01.137) 0:00:09.623 ******
2026-01-10 14:04:10.295380 | orchestrator | ===============================================================================
2026-01-10 14:04:10.295392 | orchestrator | Gathering Facts --------------------------------------------------------- 3.52s
2026-01-10 14:04:10.295403 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.15s
2026-01-10 14:04:10.295414 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.14s
2026-01-10 14:04:10.295425 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.14s
2026-01-10 14:04:10.295436 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.00s
2026-01-10 14:04:10.295447 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.56s
2026-01-10 14:04:10.295498 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s
2026-01-10 14:04:10.295512 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.10s
2026-01-10 14:04:10.295523 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2026-01-10 14:04:10.295534 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2026-01-10 14:04:10.295552 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2026-01-10 14:04:10.295563 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2026-01-10 14:04:10.295575 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2026-01-10 14:04:10.617820 | orchestrator | + osism apply sshconfig
2026-01-10 14:04:22.664235 | orchestrator | 2026-01-10 14:04:22 | INFO  | Task febaec3e-76c6-4f98-8ae0-a7cbcff43d6d (sshconfig) was prepared for execution.
2026-01-10 14:04:22.664380 | orchestrator | 2026-01-10 14:04:22 | INFO  | It takes a moment until task febaec3e-76c6-4f98-8ae0-a7cbcff43d6d (sshconfig) has been started and output is visible here.
2026-01-10 14:04:34.858793 | orchestrator |
2026-01-10 14:04:34.858981 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-01-10 14:04:34.859000 | orchestrator |
2026-01-10 14:04:34.859012 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-01-10 14:04:34.859023 | orchestrator | Saturday 10 January 2026 14:04:26 +0000 (0:00:00.172) 0:00:00.172 ******
2026-01-10 14:04:34.859064 | orchestrator | ok: [testbed-manager]
2026-01-10 14:04:34.859076 | orchestrator |
2026-01-10 14:04:34.859086 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-01-10 14:04:34.859096 | orchestrator | Saturday 10 January 2026 14:04:27 +0000 (0:00:00.570) 0:00:00.743 ******
2026-01-10 14:04:34.859113 | orchestrator | changed: [testbed-manager]
2026-01-10 14:04:34.859130 | orchestrator |
2026-01-10 14:04:34.859146 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-01-10 14:04:34.859163 | orchestrator | Saturday 10 January 2026 14:04:28 +0000 (0:00:00.537) 0:00:01.280 ******
2026-01-10 14:04:34.859180 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-01-10 14:04:34.859196 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-01-10 14:04:34.859214 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-01-10 14:04:34.859232 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-01-10 14:04:34.859251 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-01-10 14:04:34.859262 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-01-10 14:04:34.859274 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-01-10 14:04:34.859284 | orchestrator |
2026-01-10 14:04:34.859296 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-01-10 14:04:34.859308 | orchestrator | Saturday 10 January 2026 14:04:33 +0000 (0:00:05.885) 0:00:07.165 ******
2026-01-10 14:04:34.859319 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:04:34.859330 | orchestrator |
2026-01-10 14:04:34.859342 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-01-10 14:04:34.859354 | orchestrator | Saturday 10 January 2026 14:04:34 +0000 (0:00:00.082) 0:00:07.247 ******
2026-01-10 14:04:34.859366 | orchestrator | changed: [testbed-manager]
2026-01-10 14:04:34.859376 | orchestrator |
2026-01-10 14:04:34.859393 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:04:34.859412 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:04:34.859433 | orchestrator |
2026-01-10 14:04:34.859447 | orchestrator |
2026-01-10 14:04:34.859461 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:04:34.859474 | orchestrator | Saturday 10 January 2026 14:04:34 +0000 (0:00:00.566) 0:00:07.814 ******
2026-01-10 14:04:34.859516 | orchestrator | ===============================================================================
2026-01-10 14:04:34.859534 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.89s
2026-01-10 14:04:34.859549 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.57s
2026-01-10 14:04:34.859565 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s
2026-01-10 14:04:34.859582 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.54s
2026-01-10 14:04:34.859599 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s
2026-01-10 14:04:35.163870 | orchestrator | + osism apply known-hosts
2026-01-10 14:04:47.344569 | orchestrator | 2026-01-10 14:04:47 | INFO  | Task b1e481e1-8fbf-402e-bd2e-d06ed22db0eb (known-hosts) was prepared for execution.
2026-01-10 14:04:47.344692 | orchestrator | 2026-01-10 14:04:47 | INFO  | It takes a moment until task b1e481e1-8fbf-402e-bd2e-d06ed22db0eb (known-hosts) has been started and output is visible here.
2026-01-10 14:05:04.599402 | orchestrator |
2026-01-10 14:05:04.599544 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-01-10 14:05:04.599564 | orchestrator |
2026-01-10 14:05:04.599577 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-01-10 14:05:04.599591 | orchestrator | Saturday 10 January 2026 14:04:51 +0000 (0:00:00.164) 0:00:00.164 ******
2026-01-10 14:05:04.599602 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-01-10 14:05:04.599635 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-01-10 14:05:04.599646 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-01-10 14:05:04.599658 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-01-10 14:05:04.599669 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-01-10 14:05:04.599680 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-01-10 14:05:04.599691 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-01-10 14:05:04.599701 | orchestrator |
2026-01-10 14:05:04.599714 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-01-10 14:05:04.599735 | orchestrator | Saturday 10 January 2026 14:04:57 +0000 (0:00:06.029) 0:00:06.194 ******
2026-01-10 14:05:04.599749 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-01-10 14:05:04.599762 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-01-10 14:05:04.599773 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-01-10 14:05:04.599784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-01-10 14:05:04.599796 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-01-10 14:05:04.599807 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-01-10 14:05:04.599818 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-01-10 14:05:04.599829 | orchestrator |
2026-01-10 14:05:04.599841 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-10 14:05:04.599852 | orchestrator | Saturday 10 January 2026 14:04:57 +0000 (0:00:00.174) 0:00:06.369 ******
2026-01-10 14:05:04.599864 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFLNWPsANzEkH6Sw8+Qdg1O19VhAhF7KEV4oi7kEE3vHtIpZaw/w3AmSFzOApNzYazG3JqxVQjiac1rLOdJrFxc=)
2026-01-10 14:05:04.599880 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCw5WxUclt0a2kPSgG9H8vENpDXBnHkQ2il8ayhZL+iKCFvEqnqac4rPU25eqp4PyIEciciHReMPOFWJuMgGJQfSfG0OVCWCO4mrZvionbmS5vrigFuwsryBKAnZTJGafkQHFG1AAwfs9/3N8mBW2b4+TYvUH2UawCH9LJUH9Zm39I3ffa2XYJ/VB8EKjzQT6K8e8QCktMRnr/MqCKwNw54rhMAdGo2ciZrQwEjMyJPhXac1LSWOq2eRHfwM2mk9O4tKhSfvM7DkVuPTfCCJmmlY7NonwTxs11bF6+fxtWeyvI9EplDQyFZxYdSWLiz9Zo5zrJB1q1muxE1shR4KtBwnRAoqYWUS13O0P2zH4ZB2DP9aRV15EJO97B4sOA8nUGh763Ump5B6BQbs5CthpHmH0EKlesvdU/EmFRBoIGs0oWMDmtZjfhQBQ7k13osmnazkE6BeHh0cJsGfE16psGmNfRDwN3rLoFwffh2yWtYS4KF0sDPlaIUG4SiRKpwuiE=)
2026-01-10 14:05:04.599895 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAdcGbWQ6352DEQmhwOUj5UUIJsb4MejciD9+qIAPrM4)
2026-01-10 14:05:04.599908 | orchestrator |
2026-01-10 14:05:04.599920 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-10 14:05:04.599931 | orchestrator | Saturday 10 January 2026 14:04:59 +0000 (0:00:01.197) 0:00:07.566 ******
2026-01-10 14:05:04.599944 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLEY8JNnWYSACp9GoyE6ej164PMW9k7NYgJMrDhuXJdMwIdSxXL7gj6bbq0RUwk7wzI7SXOhev1OgVRTLgxP4wg=)
2026-01-10 14:05:04.599965 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHoHuFG/WORMU7CojP9YoVEZOSSr0royxf90vDMN/b5x)
2026-01-10 14:05:04.600006 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOEPgr517zGVgNppIOsvitQAZpktd8WmdTLIIFQmF3ScVRbw5dKQbZkv1W6n5tMjzv0VtjhOSA/WF8SA4uZIZX9rEklg6hIi1vFUQWt2xq1eSBVB6UUGLrcAwhSD705SS6nfqcSZrbvldAZ5s03LvsMYFnknSsLvzsvc0P4NQT8w2M8eAZntZzKFapeSU1IZt252f7ZYPTvVxhWWU2uLCvR1cAStYQ0JWVqrFORSYZ1Q3t1DU1bpiOjgBChesGxqKs6kVv+ACGHNU4/bOTUHpd//l7AzXiyshtqKAc9k2g3m5qusRfFZhpW/Bw0+q6TcktZ5w2eBMCxszR4WysUKKhdocqMGv2Bdez+SHndR+0gHz5s4oEJ2W/YdCGf4aDtU4+/rSDcf9RVN9+wQY0IIeqzA20ngmim2UC5n2E3+2PUs2s+FiE5SFPND60W28wy+4i2Mxu/gf0UXdiionQ5NYHfEEm444F9aGLJnmsHzim8BUCJKLIkiiYmGU7hy7STyE=)
2026-01-10 14:05:04.600021 | orchestrator |
2026-01-10 14:05:04.600034 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-10 14:05:04.600047 | orchestrator | Saturday 10 January 2026 14:05:00 +0000 (0:00:01.068) 0:00:08.635 ******
2026-01-10 14:05:04.600061 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChEnE5tfMqqQ+Vx39i8qcaaIibpBJQPP7AcYlKMjdyHfeMAsPm27i1eLUjdOL+RnI3sUredismLk3VoEE3F8scFieY4/5imzteBY17Bnrt6aPZQ2lIgzKRbUKegtoGMvQBtUd1lCKqk/TOfax32C7kHsNz6SDhC9vA98jOlSwBCSF+1V/hn+maDDl3hEfeenOkbF8eOm/4BSt4xfTqHqkiVa8GIlAIMqqz4suB7I5gIaC/JQ3aTSEu0tJZXQfGfQ1X2xYQli9/ozSyWvlHRt8ey512s6R/vQhQFsl/j9hnp4t/ZJh7B4UeRe6htZ+nw8ZfGhGnb+Z1aVzTXUfAvLvu6OSYA0fwYIhdNhu2Xpuj3tIUcfaJ6GgIrT+t/S6I6kTI5bTsu8KUZanQeOYQrUx1jPnD4aRPE2UG7lypcjFJUPSy0AImI3GVojI6+FZYwHZlA5ZvJyeP+0ewhwLo3Kb7b5wo/2+Gcb9gvBSKR19BOK7NqJvYlMugjST5Zdp2zQ8=)
2026-01-10 14:05:04.600137 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGZ+n0ARD2ecF7XkUE099bz0SIL0IpgCOA/9odusZjepxUtHJA21RQ56WdKK2sTh59u1t9WbEDG4hFKCB0295Dk=)
2026-01-10 14:05:04.600151 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPM9tNu2YK8LcM45973wi2icXayOyYMFgMM9aoKf+xox)
2026-01-10 14:05:04.600163 | orchestrator |
2026-01-10 14:05:04.600176 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-10 14:05:04.600189 | orchestrator | Saturday 10 January 2026 14:05:01 +0000 (0:00:01.118) 0:00:09.754 ******
2026-01-10 14:05:04.600202 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBys5sehuEJN+hNCF53yr5GKYvBpjHG2SD93BV5bVN2R)
2026-01-10 14:05:04.600220 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDx8ztvDQz4cx+DlS7RMhvbKhPAvKKKEYby/cyALE932uinn58yR6O/5QpArURCEmM2VKaD1hp5ksoW7FMHX5OxXJtS52hA+5eM3uzkzItQm8hSzdlnlU1BBFbABC4osBn/bQUVgiWxsWnwf7sn104Yuw3hKQUfmvlrWyaFQznO1P8HtIzDx5gdemOSOgsTk7RVLfgni6FAaqB2ajLxoO+dvRLf1imjWc7tiOoAdknZAQO8GQgjLaXAybm85/n4xjvUUOF3Ajv+fkHGNVRLV56wsQVQnuLV7Cx0SIhRYOfzY4rWg47LPh1SeRgZP5D9TVaZYBRYrEHdMoFWPWiZJ4NmdwANWTJde6yEz8FpEBEyO9dNQELSGN7gInITJNxhzhuvACQ/huuUBTonhaEUjBvIdSqkt6hGuDuaGY2zvH+rozr1ZCSB7gfV9VKUoD/7744qQ8fao1NOgAgxz2ccV73lfM9sGWuh7XFa4QyJ18JrfJhYDNyyGro+enzhOixXVtU=)
2026-01-10 14:05:04.600234 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKVGTAIiGMsWp+FKteiZeIQJsifKApy0RZDfn6dnSEWb1LtEewkdXWBcUrzaiaNO7tyIEqf1A3D5owAmQ8jzD20=)
2026-01-10 14:05:04.600247 | orchestrator |
2026-01-10 14:05:04.600260 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-10 14:05:04.600273 | orchestrator | Saturday 10 January 2026 14:05:02 +0000 (0:00:01.100) 0:00:10.854 ******
2026-01-10 14:05:04.600287 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKA+qwBqi8kW/OG5xd9LTrPNiwByVyiSP9gohnOT5kCg)
2026-01-10 14:05:04.600300 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5/vq1mKoiXlJVvmSVbzSUA5YU6bFttEZjISKJu8sXTL1wwxafRQB0jDJvRkoZBAjN5pfUr8sFZb47yLUsBUXfTyoddK6ax9j06oLjttWDGBCvYOPxMrfusvLBrI8XK2TC8nUE23OTQbVcAVQXmQpyFo9vtVFT9r/2Dsozo3EbH3kH+n/T5h7biL8ENv/wI84GYtz5NXyzAox2G+PxuNjQgXPwBAqqPvCLRKHsShRdrMzJSO7X/ii4tJCY71aEG4RGi+1uWKnag8XxjYqVQFObArCvqP2p81/C0tPv5mmjHTW7Ysrnw5FJKmIL5tFPSaX5EmIsGxEg6XXSAp22q8bE9ZHxr+l4YhV9wyaeFY/5D192+3XKDdyHxmU5wNA0Antadu9bktGNUws1SQ+B5fRCdYme+YzKYRv47pJTEqGBRAxMZLaVfL3G1/1lWJ9zqV1OhYne1wSNp1EpmkIXJkyD67tPcLJ0WMEBijiUcW5NOfCmAkjf0DSYjJaP5oIpCvM=)
2026-01-10 14:05:04.600319 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIxjg0YUNB9aaRNxuhOjv2mE05LiqVhZZU9KzKoLPwyNoN/yu4/smBf2WZ1tn2ZuwJvgNERPeqWSzOcHaudDg+E=)
2026-01-10 14:05:04.600330 | orchestrator |
2026-01-10 14:05:04.600341 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-10 14:05:04.600352 | orchestrator | Saturday 10 January 2026 14:05:03 +0000 (0:00:01.120) 0:00:11.975 ******
2026-01-10 14:05:04.600373 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyVGxxO9vzUw1Y1b7lvXIpSdP6450NDiKcb9w64rgr4lMtdIU38yKycUrhmd37RvgLRcLaqmSlTgfkcupi5A5cxMq8H25clLpmzgZsoRmErc7mhqpyo76AOkeBHOw4mNbFjfK7be89KxjMGWCYOrTnx1imdvGD5zNdt4GedDliDbz2RDrkNjm/5HuQR+2QvgWyBdMZhkcJm1lrWZSgDpYSAS7VXDQ4Frs35hiGcyHcTtD91C6gtXvSB0X17WAcShU3mzCeBd/MlPTWtZrMZ1RecWBg0lpk14ph1FHIgh7SVK0tNn+2u/gaPM+caKHq9q4kNLDciNUEqdrK4SQ+mTUzWIRaYUrGBAYRamE0R9jghdtNgXntjf4hbBpc1ml5ozqHAg2nWZ/djd50/qYTUY6K+8uw2ZC8PU4+uPfqWOESN8KwopxBYEqiMf5sTU22o+OpCmE0g4CiJPsVUzacfMjJNP+512RTd0BwNVdN/tYy7WLBoMfE5oJz+UBy3TugDpM=)
2026-01-10 14:05:15.594391 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOj19B5tdDaZwU5osfbAVW3wnUoCiOZFBBu3BgjSzwVtmidn1tFVgIFEdoPsV+j1k1xLnT9334Dapjiu50HtxhY=)
2026-01-10 14:05:15.594538 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHhPm7EaLFvg9rhj/gVXWYyLD+NxIXZ0qTBbBc8yUjiQ)
2026-01-10 14:05:15.594560 | orchestrator |
2026-01-10 14:05:15.594574 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-10 14:05:15.594588 | orchestrator | Saturday 10 January 2026 14:05:04 +0000 (0:00:01.145) 0:00:13.121 ******
2026-01-10 14:05:15.594602 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC15fQp1upJzd+SisyEy+EmMSeBJ1Zcg/uaVkFIjbos4OxNyOeB+HcybOt0gxp7DF5jD+h79beMcmcaOjNPYpmbRbpcN+BFwNk6DAxF0GmnaXfUMmifgnDj+Q6tKHZbiqizFczPxLlEzGiY7HfBMqxfWzM02Sc04ieb3hdAt98ruDvoeSoEYb04KIl1hbpPCjn/BFc4JjV5XPozGlyn8iUDgTZ4tMyend45WwJan2AtlUxacDctutDhPqWJTd5IXbp3h9OoWr/vOs2dp3ISfWQqBCgzNU3+7PzmeqR7ZyotmhEM7lPjtsF6kq1X6eQR3zAAEbe/hWypH+RIm2BfLb/n7mKmmx7QBe4h55HAp2TaDiH2AZF4Z+cuD5/7FBFiRU50K0c2RyrevUG9p8TFEZ2WfuFPXhe94KZXi13CYdeK30teFte5qcxix2Gn+bgxSx+m0N0lqtP7DxZVAstsYGThtbukuFSWigoa0XJQ8x6kVUSJhg2zay54y87QXQN5uZs=)
2026-01-10 14:05:15.594617 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSusGcdqMysYZrOliLZYKIaAPFngKX1tFO8PPxmXGAtulhb6V2B0WsStqjaLMpUkV5nMuI7WgP7ncqJ4tO0T3Y=)
2026-01-10 14:05:15.594629 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ1Jz+H8BWmBiK/4iijDGxGy1DXclIXaftdvRgpCNYcJ)
2026-01-10 14:05:15.594640 | orchestrator |
2026-01-10 14:05:15.594652 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2026-01-10 14:05:15.594665 | orchestrator | Saturday 10 January 2026 14:05:05 +0000 (0:00:01.097) 0:00:14.218 ******
2026-01-10 14:05:15.594676 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-01-10 14:05:15.594694 | orchestrator | ok: [testbed-manager] =>
(item=testbed-node-3) 2026-01-10 14:05:15.594713 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-10 14:05:15.594752 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-10 14:05:15.594774 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-10 14:05:15.594821 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-10 14:05:15.594833 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-10 14:05:15.594844 | orchestrator | 2026-01-10 14:05:15.594856 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-10 14:05:15.594868 | orchestrator | Saturday 10 January 2026 14:05:10 +0000 (0:00:05.264) 0:00:19.483 ****** 2026-01-10 14:05:15.594880 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-10 14:05:15.594894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-10 14:05:15.594905 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-10 14:05:15.594917 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-10 14:05:15.594928 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-10 14:05:15.594939 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-10 14:05:15.594950 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-10 14:05:15.594961 | orchestrator | 2026-01-10 14:05:15.594973 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:05:15.594984 | orchestrator | Saturday 10 January 2026 14:05:11 +0000 (0:00:00.177) 0:00:19.660 ****** 2026-01-10 14:05:15.594996 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAdcGbWQ6352DEQmhwOUj5UUIJsb4MejciD9+qIAPrM4) 2026-01-10 14:05:15.595036 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCw5WxUclt0a2kPSgG9H8vENpDXBnHkQ2il8ayhZL+iKCFvEqnqac4rPU25eqp4PyIEciciHReMPOFWJuMgGJQfSfG0OVCWCO4mrZvionbmS5vrigFuwsryBKAnZTJGafkQHFG1AAwfs9/3N8mBW2b4+TYvUH2UawCH9LJUH9Zm39I3ffa2XYJ/VB8EKjzQT6K8e8QCktMRnr/MqCKwNw54rhMAdGo2ciZrQwEjMyJPhXac1LSWOq2eRHfwM2mk9O4tKhSfvM7DkVuPTfCCJmmlY7NonwTxs11bF6+fxtWeyvI9EplDQyFZxYdSWLiz9Zo5zrJB1q1muxE1shR4KtBwnRAoqYWUS13O0P2zH4ZB2DP9aRV15EJO97B4sOA8nUGh763Ump5B6BQbs5CthpHmH0EKlesvdU/EmFRBoIGs0oWMDmtZjfhQBQ7k13osmnazkE6BeHh0cJsGfE16psGmNfRDwN3rLoFwffh2yWtYS4KF0sDPlaIUG4SiRKpwuiE=) 2026-01-10 14:05:15.595050 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFLNWPsANzEkH6Sw8+Qdg1O19VhAhF7KEV4oi7kEE3vHtIpZaw/w3AmSFzOApNzYazG3JqxVQjiac1rLOdJrFxc=) 2026-01-10 14:05:15.595062 | orchestrator | 2026-01-10 14:05:15.595073 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:05:15.595085 | orchestrator | Saturday 10 January 2026 
14:05:12 +0000 (0:00:01.128) 0:00:20.788 ****** 2026-01-10 14:05:15.595097 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOEPgr517zGVgNppIOsvitQAZpktd8WmdTLIIFQmF3ScVRbw5dKQbZkv1W6n5tMjzv0VtjhOSA/WF8SA4uZIZX9rEklg6hIi1vFUQWt2xq1eSBVB6UUGLrcAwhSD705SS6nfqcSZrbvldAZ5s03LvsMYFnknSsLvzsvc0P4NQT8w2M8eAZntZzKFapeSU1IZt252f7ZYPTvVxhWWU2uLCvR1cAStYQ0JWVqrFORSYZ1Q3t1DU1bpiOjgBChesGxqKs6kVv+ACGHNU4/bOTUHpd//l7AzXiyshtqKAc9k2g3m5qusRfFZhpW/Bw0+q6TcktZ5w2eBMCxszR4WysUKKhdocqMGv2Bdez+SHndR+0gHz5s4oEJ2W/YdCGf4aDtU4+/rSDcf9RVN9+wQY0IIeqzA20ngmim2UC5n2E3+2PUs2s+FiE5SFPND60W28wy+4i2Mxu/gf0UXdiionQ5NYHfEEm444F9aGLJnmsHzim8BUCJKLIkiiYmGU7hy7STyE=) 2026-01-10 14:05:15.595117 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLEY8JNnWYSACp9GoyE6ej164PMW9k7NYgJMrDhuXJdMwIdSxXL7gj6bbq0RUwk7wzI7SXOhev1OgVRTLgxP4wg=) 2026-01-10 14:05:15.595128 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHoHuFG/WORMU7CojP9YoVEZOSSr0royxf90vDMN/b5x) 2026-01-10 14:05:15.595140 | orchestrator | 2026-01-10 14:05:15.595151 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:05:15.595163 | orchestrator | Saturday 10 January 2026 14:05:13 +0000 (0:00:01.096) 0:00:21.884 ****** 2026-01-10 14:05:15.595174 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPM9tNu2YK8LcM45973wi2icXayOyYMFgMM9aoKf+xox) 2026-01-10 14:05:15.595186 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQChEnE5tfMqqQ+Vx39i8qcaaIibpBJQPP7AcYlKMjdyHfeMAsPm27i1eLUjdOL+RnI3sUredismLk3VoEE3F8scFieY4/5imzteBY17Bnrt6aPZQ2lIgzKRbUKegtoGMvQBtUd1lCKqk/TOfax32C7kHsNz6SDhC9vA98jOlSwBCSF+1V/hn+maDDl3hEfeenOkbF8eOm/4BSt4xfTqHqkiVa8GIlAIMqqz4suB7I5gIaC/JQ3aTSEu0tJZXQfGfQ1X2xYQli9/ozSyWvlHRt8ey512s6R/vQhQFsl/j9hnp4t/ZJh7B4UeRe6htZ+nw8ZfGhGnb+Z1aVzTXUfAvLvu6OSYA0fwYIhdNhu2Xpuj3tIUcfaJ6GgIrT+t/S6I6kTI5bTsu8KUZanQeOYQrUx1jPnD4aRPE2UG7lypcjFJUPSy0AImI3GVojI6+FZYwHZlA5ZvJyeP+0ewhwLo3Kb7b5wo/2+Gcb9gvBSKR19BOK7NqJvYlMugjST5Zdp2zQ8=) 2026-01-10 14:05:15.595197 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGZ+n0ARD2ecF7XkUE099bz0SIL0IpgCOA/9odusZjepxUtHJA21RQ56WdKK2sTh59u1t9WbEDG4hFKCB0295Dk=) 2026-01-10 14:05:15.595209 | orchestrator | 2026-01-10 14:05:15.595220 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:05:15.595231 | orchestrator | Saturday 10 January 2026 14:05:14 +0000 (0:00:01.091) 0:00:22.975 ****** 2026-01-10 14:05:15.595243 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBys5sehuEJN+hNCF53yr5GKYvBpjHG2SD93BV5bVN2R) 2026-01-10 14:05:15.595260 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDx8ztvDQz4cx+DlS7RMhvbKhPAvKKKEYby/cyALE932uinn58yR6O/5QpArURCEmM2VKaD1hp5ksoW7FMHX5OxXJtS52hA+5eM3uzkzItQm8hSzdlnlU1BBFbABC4osBn/bQUVgiWxsWnwf7sn104Yuw3hKQUfmvlrWyaFQznO1P8HtIzDx5gdemOSOgsTk7RVLfgni6FAaqB2ajLxoO+dvRLf1imjWc7tiOoAdknZAQO8GQgjLaXAybm85/n4xjvUUOF3Ajv+fkHGNVRLV56wsQVQnuLV7Cx0SIhRYOfzY4rWg47LPh1SeRgZP5D9TVaZYBRYrEHdMoFWPWiZJ4NmdwANWTJde6yEz8FpEBEyO9dNQELSGN7gInITJNxhzhuvACQ/huuUBTonhaEUjBvIdSqkt6hGuDuaGY2zvH+rozr1ZCSB7gfV9VKUoD/7744qQ8fao1NOgAgxz2ccV73lfM9sGWuh7XFa4QyJ18JrfJhYDNyyGro+enzhOixXVtU=) 2026-01-10 14:05:15.595283 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKVGTAIiGMsWp+FKteiZeIQJsifKApy0RZDfn6dnSEWb1LtEewkdXWBcUrzaiaNO7tyIEqf1A3D5owAmQ8jzD20=) 2026-01-10 14:05:20.251833 | orchestrator | 2026-01-10 14:05:20.251946 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:05:20.251965 | orchestrator | Saturday 10 January 2026 14:05:15 +0000 (0:00:01.140) 0:00:24.116 ****** 2026-01-10 14:05:20.251977 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKA+qwBqi8kW/OG5xd9LTrPNiwByVyiSP9gohnOT5kCg) 2026-01-10 14:05:20.251993 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5/vq1mKoiXlJVvmSVbzSUA5YU6bFttEZjISKJu8sXTL1wwxafRQB0jDJvRkoZBAjN5pfUr8sFZb47yLUsBUXfTyoddK6ax9j06oLjttWDGBCvYOPxMrfusvLBrI8XK2TC8nUE23OTQbVcAVQXmQpyFo9vtVFT9r/2Dsozo3EbH3kH+n/T5h7biL8ENv/wI84GYtz5NXyzAox2G+PxuNjQgXPwBAqqPvCLRKHsShRdrMzJSO7X/ii4tJCY71aEG4RGi+1uWKnag8XxjYqVQFObArCvqP2p81/C0tPv5mmjHTW7Ysrnw5FJKmIL5tFPSaX5EmIsGxEg6XXSAp22q8bE9ZHxr+l4YhV9wyaeFY/5D192+3XKDdyHxmU5wNA0Antadu9bktGNUws1SQ+B5fRCdYme+YzKYRv47pJTEqGBRAxMZLaVfL3G1/1lWJ9zqV1OhYne1wSNp1EpmkIXJkyD67tPcLJ0WMEBijiUcW5NOfCmAkjf0DSYjJaP5oIpCvM=) 2026-01-10 14:05:20.252033 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIxjg0YUNB9aaRNxuhOjv2mE05LiqVhZZU9KzKoLPwyNoN/yu4/smBf2WZ1tn2ZuwJvgNERPeqWSzOcHaudDg+E=) 2026-01-10 14:05:20.252050 | orchestrator | 2026-01-10 14:05:20.252069 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:05:20.252088 | orchestrator | Saturday 10 January 2026 14:05:16 +0000 (0:00:01.116) 0:00:25.233 ****** 2026-01-10 14:05:20.252132 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIHhPm7EaLFvg9rhj/gVXWYyLD+NxIXZ0qTBbBc8yUjiQ) 2026-01-10 14:05:20.252149 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyVGxxO9vzUw1Y1b7lvXIpSdP6450NDiKcb9w64rgr4lMtdIU38yKycUrhmd37RvgLRcLaqmSlTgfkcupi5A5cxMq8H25clLpmzgZsoRmErc7mhqpyo76AOkeBHOw4mNbFjfK7be89KxjMGWCYOrTnx1imdvGD5zNdt4GedDliDbz2RDrkNjm/5HuQR+2QvgWyBdMZhkcJm1lrWZSgDpYSAS7VXDQ4Frs35hiGcyHcTtD91C6gtXvSB0X17WAcShU3mzCeBd/MlPTWtZrMZ1RecWBg0lpk14ph1FHIgh7SVK0tNn+2u/gaPM+caKHq9q4kNLDciNUEqdrK4SQ+mTUzWIRaYUrGBAYRamE0R9jghdtNgXntjf4hbBpc1ml5ozqHAg2nWZ/djd50/qYTUY6K+8uw2ZC8PU4+uPfqWOESN8KwopxBYEqiMf5sTU22o+OpCmE0g4CiJPsVUzacfMjJNP+512RTd0BwNVdN/tYy7WLBoMfE5oJz+UBy3TugDpM=) 2026-01-10 14:05:20.252161 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOj19B5tdDaZwU5osfbAVW3wnUoCiOZFBBu3BgjSzwVtmidn1tFVgIFEdoPsV+j1k1xLnT9334Dapjiu50HtxhY=) 2026-01-10 14:05:20.252172 | orchestrator | 2026-01-10 14:05:20.252184 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-10 14:05:20.252195 | orchestrator | Saturday 10 January 2026 14:05:17 +0000 (0:00:01.165) 0:00:26.398 ****** 2026-01-10 14:05:20.252206 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC15fQp1upJzd+SisyEy+EmMSeBJ1Zcg/uaVkFIjbos4OxNyOeB+HcybOt0gxp7DF5jD+h79beMcmcaOjNPYpmbRbpcN+BFwNk6DAxF0GmnaXfUMmifgnDj+Q6tKHZbiqizFczPxLlEzGiY7HfBMqxfWzM02Sc04ieb3hdAt98ruDvoeSoEYb04KIl1hbpPCjn/BFc4JjV5XPozGlyn8iUDgTZ4tMyend45WwJan2AtlUxacDctutDhPqWJTd5IXbp3h9OoWr/vOs2dp3ISfWQqBCgzNU3+7PzmeqR7ZyotmhEM7lPjtsF6kq1X6eQR3zAAEbe/hWypH+RIm2BfLb/n7mKmmx7QBe4h55HAp2TaDiH2AZF4Z+cuD5/7FBFiRU50K0c2RyrevUG9p8TFEZ2WfuFPXhe94KZXi13CYdeK30teFte5qcxix2Gn+bgxSx+m0N0lqtP7DxZVAstsYGThtbukuFSWigoa0XJQ8x6kVUSJhg2zay54y87QXQN5uZs=) 2026-01-10 14:05:20.252217 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSusGcdqMysYZrOliLZYKIaAPFngKX1tFO8PPxmXGAtulhb6V2B0WsStqjaLMpUkV5nMuI7WgP7ncqJ4tO0T3Y=) 2026-01-10 14:05:20.252229 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ1Jz+H8BWmBiK/4iijDGxGy1DXclIXaftdvRgpCNYcJ) 2026-01-10 14:05:20.252240 | orchestrator | 2026-01-10 14:05:20.252251 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-10 14:05:20.252262 | orchestrator | Saturday 10 January 2026 14:05:18 +0000 (0:00:01.072) 0:00:27.470 ****** 2026-01-10 14:05:20.252274 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-10 14:05:20.252285 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-10 14:05:20.252296 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-10 14:05:20.252307 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-10 14:05:20.252318 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-10 14:05:20.252329 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-10 14:05:20.252340 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-10 14:05:20.252350 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:05:20.252362 | orchestrator | 2026-01-10 14:05:20.252394 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-10 14:05:20.252416 | orchestrator | Saturday 10 January 2026 14:05:19 +0000 (0:00:00.177) 0:00:27.648 ****** 2026-01-10 14:05:20.252429 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:05:20.252442 | orchestrator | 2026-01-10 14:05:20.252454 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-10 14:05:20.252467 | orchestrator | Saturday 10 January 2026 
14:05:19 +0000 (0:00:00.064) 0:00:27.713 ****** 2026-01-10 14:05:20.252480 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:05:20.252492 | orchestrator | 2026-01-10 14:05:20.252536 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-10 14:05:20.252548 | orchestrator | Saturday 10 January 2026 14:05:19 +0000 (0:00:00.062) 0:00:27.775 ****** 2026-01-10 14:05:20.252561 | orchestrator | changed: [testbed-manager] 2026-01-10 14:05:20.252573 | orchestrator | 2026-01-10 14:05:20.252584 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:05:20.252595 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-10 14:05:20.252607 | orchestrator | 2026-01-10 14:05:20.252618 | orchestrator | 2026-01-10 14:05:20.252629 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:05:20.252640 | orchestrator | Saturday 10 January 2026 14:05:20 +0000 (0:00:00.760) 0:00:28.535 ****** 2026-01-10 14:05:20.252650 | orchestrator | =============================================================================== 2026-01-10 14:05:20.252661 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.03s 2026-01-10 14:05:20.252672 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.26s 2026-01-10 14:05:20.252683 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-01-10 14:05:20.252694 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-01-10 14:05:20.252705 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-01-10 14:05:20.252716 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-01-10 
14:05:20.252727 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-01-10 14:05:20.252737 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-01-10 14:05:20.252748 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-01-10 14:05:20.252759 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-01-10 14:05:20.252770 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-01-10 14:05:20.252781 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-01-10 14:05:20.252791 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-01-10 14:05:20.252810 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-10 14:05:20.252821 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-01-10 14:05:20.252832 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-01-10 14:05:20.252843 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.76s 2026-01-10 14:05:20.252854 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2026-01-10 14:05:20.252864 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-01-10 14:05:20.252875 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-01-10 14:05:20.620739 | orchestrator | + osism apply squid 2026-01-10 14:05:32.851450 | orchestrator | 2026-01-10 14:05:32 | INFO  | Task bc6df519-996d-4417-9d13-8eb77822062c (squid) was prepared for execution. 
2026-01-10 14:05:32.851610 | orchestrator | 2026-01-10 14:05:32 | INFO  | It takes a moment until task bc6df519-996d-4417-9d13-8eb77822062c (squid) has been started and output is visible here. 2026-01-10 14:07:31.337126 | orchestrator | 2026-01-10 14:07:31.337278 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-10 14:07:31.337296 | orchestrator | 2026-01-10 14:07:31.337309 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-10 14:07:31.337328 | orchestrator | Saturday 10 January 2026 14:05:37 +0000 (0:00:00.167) 0:00:00.167 ****** 2026-01-10 14:07:31.337348 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-10 14:07:31.337368 | orchestrator | 2026-01-10 14:07:31.337387 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-10 14:07:31.337408 | orchestrator | Saturday 10 January 2026 14:05:37 +0000 (0:00:00.081) 0:00:00.249 ****** 2026-01-10 14:07:31.337430 | orchestrator | ok: [testbed-manager] 2026-01-10 14:07:31.337444 | orchestrator | 2026-01-10 14:07:31.337455 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-01-10 14:07:31.337467 | orchestrator | Saturday 10 January 2026 14:05:38 +0000 (0:00:01.536) 0:00:01.786 ****** 2026-01-10 14:07:31.337480 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-10 14:07:31.337491 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-10 14:07:31.337503 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-10 14:07:31.337515 | orchestrator | 2026-01-10 14:07:31.337710 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-10 14:07:31.337727 | orchestrator | Saturday 
10 January 2026 14:05:39 +0000 (0:00:01.194) 0:00:02.980 ****** 2026-01-10 14:07:31.337740 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-10 14:07:31.337753 | orchestrator | 2026-01-10 14:07:31.337766 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-10 14:07:31.337779 | orchestrator | Saturday 10 January 2026 14:05:41 +0000 (0:00:01.095) 0:00:04.076 ****** 2026-01-10 14:07:31.337792 | orchestrator | ok: [testbed-manager] 2026-01-10 14:07:31.337804 | orchestrator | 2026-01-10 14:07:31.337818 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-10 14:07:31.337831 | orchestrator | Saturday 10 January 2026 14:05:41 +0000 (0:00:00.379) 0:00:04.455 ****** 2026-01-10 14:07:31.337843 | orchestrator | changed: [testbed-manager] 2026-01-10 14:07:31.337854 | orchestrator | 2026-01-10 14:07:31.337865 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-10 14:07:31.337876 | orchestrator | Saturday 10 January 2026 14:05:42 +0000 (0:00:00.980) 0:00:05.436 ****** 2026-01-10 14:07:31.337887 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-10 14:07:31.337900 | orchestrator | ok: [testbed-manager] 2026-01-10 14:07:31.337911 | orchestrator | 2026-01-10 14:07:31.337922 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-01-10 14:07:31.337933 | orchestrator | Saturday 10 January 2026 14:06:18 +0000 (0:00:35.753) 0:00:41.189 ****** 2026-01-10 14:07:31.337944 | orchestrator | changed: [testbed-manager] 2026-01-10 14:07:31.337955 | orchestrator | 2026-01-10 14:07:31.337966 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-01-10 14:07:31.337977 | orchestrator | Saturday 10 January 2026 14:06:30 +0000 (0:00:12.060) 0:00:53.250 ****** 2026-01-10 14:07:31.337988 | orchestrator | Pausing for 60 seconds 2026-01-10 14:07:31.337999 | orchestrator | changed: [testbed-manager] 2026-01-10 14:07:31.338010 | orchestrator | 2026-01-10 14:07:31.338083 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-01-10 14:07:31.338095 | orchestrator | Saturday 10 January 2026 14:07:30 +0000 (0:01:00.081) 0:01:53.332 ****** 2026-01-10 14:07:31.338106 | orchestrator | ok: [testbed-manager] 2026-01-10 14:07:31.338117 | orchestrator | 2026-01-10 14:07:31.338128 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-01-10 14:07:31.338172 | orchestrator | Saturday 10 January 2026 14:07:30 +0000 (0:00:00.081) 0:01:53.413 ****** 2026-01-10 14:07:31.338184 | orchestrator | changed: [testbed-manager] 2026-01-10 14:07:31.338195 | orchestrator | 2026-01-10 14:07:31.338206 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:07:31.338218 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:07:31.338229 | orchestrator | 2026-01-10 14:07:31.338241 | orchestrator | 2026-01-10 14:07:31.338251 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-10 14:07:31.338262 | orchestrator | Saturday 10 January 2026 14:07:31 +0000 (0:00:00.640) 0:01:54.053 ****** 2026-01-10 14:07:31.338273 | orchestrator | =============================================================================== 2026-01-10 14:07:31.338285 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-01-10 14:07:31.338304 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 35.75s 2026-01-10 14:07:31.338323 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.06s 2026-01-10 14:07:31.338340 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.54s 2026-01-10 14:07:31.338357 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.19s 2026-01-10 14:07:31.338375 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.10s 2026-01-10 14:07:31.338390 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.98s 2026-01-10 14:07:31.338405 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.64s 2026-01-10 14:07:31.338420 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2026-01-10 14:07:31.338437 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-01-10 14:07:31.338454 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2026-01-10 14:07:31.640638 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-10 14:07:31.640764 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-01-10 14:07:31.649977 | orchestrator | + set -e 2026-01-10 14:07:31.650074 | orchestrator | + NAMESPACE=kolla 2026-01-10 
14:07:31.650091 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-01-10 14:07:31.655996 | orchestrator | ++ semver latest 9.0.0 2026-01-10 14:07:31.713893 | orchestrator | + [[ -1 -lt 0 ]] 2026-01-10 14:07:31.714009 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-10 14:07:31.714656 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-01-10 14:07:43.799783 | orchestrator | 2026-01-10 14:07:43 | INFO  | Task a5398e80-0174-4a65-bdb5-c4f6dc41a709 (operator) was prepared for execution. 2026-01-10 14:07:43.799927 | orchestrator | 2026-01-10 14:07:43 | INFO  | It takes a moment until task a5398e80-0174-4a65-bdb5-c4f6dc41a709 (operator) has been started and output is visible here. 2026-01-10 14:08:00.372104 | orchestrator | 2026-01-10 14:08:00.372247 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-01-10 14:08:00.372273 | orchestrator | 2026-01-10 14:08:00.372293 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-10 14:08:00.372313 | orchestrator | Saturday 10 January 2026 14:07:48 +0000 (0:00:00.139) 0:00:00.139 ****** 2026-01-10 14:08:00.372335 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:08:00.372354 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:08:00.372374 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:08:00.372392 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:08:00.372411 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:08:00.372436 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:08:00.372455 | orchestrator | 2026-01-10 14:08:00.372473 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-01-10 14:08:00.372492 | orchestrator | Saturday 10 January 2026 14:07:51 +0000 (0:00:03.305) 0:00:03.444 ****** 2026-01-10 14:08:00.372595 | orchestrator | ok: [testbed-node-1] 
2026-01-10 14:08:00.372616 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:08:00.372636 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:08:00.372656 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:08:00.372676 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:08:00.372695 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:08:00.372714 | orchestrator |
2026-01-10 14:08:00.372734 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-01-10 14:08:00.372753 | orchestrator |
2026-01-10 14:08:00.372774 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-01-10 14:08:00.372794 | orchestrator | Saturday 10 January 2026 14:07:52 +0000 (0:00:00.865) 0:00:04.310 ******
2026-01-10 14:08:00.372813 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:08:00.372833 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:08:00.372852 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:08:00.372871 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:08:00.372891 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:08:00.372910 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:08:00.372930 | orchestrator |
2026-01-10 14:08:00.372950 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-01-10 14:08:00.372969 | orchestrator | Saturday 10 January 2026 14:07:52 +0000 (0:00:00.177) 0:00:04.487 ******
2026-01-10 14:08:00.372989 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:08:00.373009 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:08:00.373028 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:08:00.373048 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:08:00.373068 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:08:00.373087 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:08:00.373107 | orchestrator |
2026-01-10 14:08:00.373126 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-01-10 14:08:00.373172 | orchestrator | Saturday 10 January 2026 14:07:52 +0000 (0:00:00.166) 0:00:04.654 ******
2026-01-10 14:08:00.373193 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:08:00.373215 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:08:00.373235 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:08:00.373254 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:08:00.373274 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:08:00.373294 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:08:00.373313 | orchestrator |
2026-01-10 14:08:00.373333 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-01-10 14:08:00.373352 | orchestrator | Saturday 10 January 2026 14:07:53 +0000 (0:00:00.643) 0:00:05.297 ******
2026-01-10 14:08:00.373372 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:08:00.373392 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:08:00.373412 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:08:00.373431 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:08:00.373450 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:08:00.373470 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:08:00.373490 | orchestrator |
2026-01-10 14:08:00.373509 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-01-10 14:08:00.373593 | orchestrator | Saturday 10 January 2026 14:07:54 +0000 (0:00:00.840) 0:00:06.138 ******
2026-01-10 14:08:00.373616 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-01-10 14:08:00.373635 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-01-10 14:08:00.373653 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-01-10 14:08:00.373672 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-01-10 14:08:00.373691 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-01-10 14:08:00.373709 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-01-10 14:08:00.373728 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-01-10 14:08:00.373746 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-01-10 14:08:00.373764 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-01-10 14:08:00.373782 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-01-10 14:08:00.373813 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-01-10 14:08:00.373832 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-01-10 14:08:00.373850 | orchestrator |
2026-01-10 14:08:00.373868 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-01-10 14:08:00.373887 | orchestrator | Saturday 10 January 2026 14:07:55 +0000 (0:00:01.392) 0:00:07.531 ******
2026-01-10 14:08:00.373905 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:08:00.373924 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:08:00.373942 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:08:00.373961 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:08:00.373979 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:08:00.373997 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:08:00.374093 | orchestrator |
2026-01-10 14:08:00.374117 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-01-10 14:08:00.374136 | orchestrator | Saturday 10 January 2026 14:07:56 +0000 (0:00:01.311) 0:00:08.842 ******
2026-01-10 14:08:00.374155 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-01-10 14:08:00.374174 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-01-10 14:08:00.374192 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-01-10 14:08:00.374211 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-01-10 14:08:00.374253 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-01-10 14:08:00.374272 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-01-10 14:08:00.374291 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-01-10 14:08:00.374309 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-01-10 14:08:00.374327 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-01-10 14:08:00.374345 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-01-10 14:08:00.374397 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-01-10 14:08:00.374418 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-01-10 14:08:00.374438 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-01-10 14:08:00.374458 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-01-10 14:08:00.374477 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-01-10 14:08:00.374496 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-01-10 14:08:00.374515 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-01-10 14:08:00.374559 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-01-10 14:08:00.374578 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-01-10 14:08:00.374597 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-01-10 14:08:00.374615 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-01-10 14:08:00.374634 | orchestrator |
2026-01-10 14:08:00.374653 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-01-10 14:08:00.374672 | orchestrator | Saturday 10 January 2026 14:07:58 +0000 (0:00:01.371) 0:00:10.214 ******
2026-01-10 14:08:00.374691 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:08:00.374710 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:08:00.374728 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:08:00.374746 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:08:00.374765 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:08:00.374784 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:08:00.374802 | orchestrator |
2026-01-10 14:08:00.374820 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-01-10 14:08:00.374840 | orchestrator | Saturday 10 January 2026 14:07:58 +0000 (0:00:00.176) 0:00:10.391 ******
2026-01-10 14:08:00.374875 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:08:00.374902 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:08:00.374921 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:08:00.374944 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:08:00.374964 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:08:00.374983 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:08:00.375002 | orchestrator |
2026-01-10 14:08:00.375023 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-01-10 14:08:00.375042 | orchestrator | Saturday 10 January 2026 14:07:58 +0000 (0:00:00.194) 0:00:10.585 ******
2026-01-10 14:08:00.375062 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:08:00.375083 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:08:00.375105 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:08:00.375121 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:08:00.375138 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:08:00.375155 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:08:00.375172 | orchestrator |
2026-01-10 14:08:00.375189 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-01-10 14:08:00.375206 | orchestrator | Saturday 10 January 2026 14:07:59 +0000 (0:00:00.626) 0:00:11.211 ******
2026-01-10 14:08:00.375223 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:08:00.375239 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:08:00.375256 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:08:00.375273 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:08:00.375289 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:08:00.375307 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:08:00.375324 | orchestrator |
2026-01-10 14:08:00.375341 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-01-10 14:08:00.375358 | orchestrator | Saturday 10 January 2026 14:07:59 +0000 (0:00:00.169) 0:00:11.380 ******
2026-01-10 14:08:00.375375 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-10 14:08:00.375392 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:08:00.375409 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-10 14:08:00.375426 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-10 14:08:00.375442 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:08:00.375459 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-10 14:08:00.375475 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:08:00.375492 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:08:00.375509 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-10 14:08:00.375526 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:08:00.375566 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-10 14:08:00.375583 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:08:00.375600 | orchestrator |
2026-01-10 14:08:00.375616 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-01-10 14:08:00.375633 | orchestrator | Saturday 10 January 2026 14:07:59 +0000 (0:00:00.733) 0:00:12.114 ******
2026-01-10 14:08:00.375649 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:08:00.375665 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:08:00.375682 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:08:00.375699 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:08:00.375716 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:08:00.375732 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:08:00.375749 | orchestrator |
2026-01-10 14:08:00.375765 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-01-10 14:08:00.375782 | orchestrator | Saturday 10 January 2026 14:08:00 +0000 (0:00:00.190) 0:00:12.305 ******
2026-01-10 14:08:00.375799 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:08:00.375815 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:08:00.375832 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:08:00.375848 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:08:00.375877 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:08:01.745126 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:08:01.745262 | orchestrator |
2026-01-10 14:08:01.745281 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-01-10 14:08:01.745295 | orchestrator | Saturday 10 January 2026 14:08:00 +0000 (0:00:00.182) 0:00:12.487 ******
2026-01-10 14:08:01.745307 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:08:01.745318 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:08:01.745329 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:08:01.745340 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:08:01.745351 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:08:01.745362 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:08:01.745372 | orchestrator |
2026-01-10 14:08:01.745384 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-01-10 14:08:01.745395 | orchestrator | Saturday 10 January 2026 14:08:00 +0000 (0:00:00.157) 0:00:12.644 ******
2026-01-10 14:08:01.745406 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:08:01.745417 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:08:01.745428 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:08:01.745439 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:08:01.745450 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:08:01.745461 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:08:01.745471 | orchestrator |
2026-01-10 14:08:01.745482 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-01-10 14:08:01.745493 | orchestrator | Saturday 10 January 2026 14:08:01 +0000 (0:00:00.725) 0:00:13.370 ******
2026-01-10 14:08:01.745504 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:08:01.745515 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:08:01.745526 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:08:01.745577 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:08:01.745598 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:08:01.745617 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:08:01.745633 | orchestrator |
2026-01-10 14:08:01.745645 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:08:01.745657 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:08:01.745670 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:08:01.745700 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:08:01.745712 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:08:01.745723 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:08:01.745734 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-10 14:08:01.745746 | orchestrator |
2026-01-10 14:08:01.745757 | orchestrator |
2026-01-10 14:08:01.745768 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:08:01.745779 | orchestrator | Saturday 10 January 2026 14:08:01 +0000 (0:00:00.232) 0:00:13.603 ******
2026-01-10 14:08:01.745790 | orchestrator | ===============================================================================
2026-01-10 14:08:01.745801 | orchestrator | Gathering Facts --------------------------------------------------------- 3.31s
2026-01-10 14:08:01.745812 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.39s
2026-01-10 14:08:01.745823 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.37s
2026-01-10 14:08:01.745835 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.31s
2026-01-10 14:08:01.745854 | orchestrator | Do not require tty for all users ---------------------------------------- 0.87s
2026-01-10 14:08:01.745865 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.84s
2026-01-10 14:08:01.745876 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s
2026-01-10 14:08:01.745887 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.73s
2026-01-10 14:08:01.745897 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.64s
2026-01-10 14:08:01.745908 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.63s
2026-01-10 14:08:01.745919 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s
2026-01-10 14:08:01.745930 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s
2026-01-10 14:08:01.745941 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.19s
2026-01-10 14:08:01.745952 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s
2026-01-10 14:08:01.745962 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s
2026-01-10 14:08:01.745973 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s
2026-01-10 14:08:01.745984 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s
2026-01-10 14:08:01.745995 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2026-01-10 14:08:01.746006 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2026-01-10 14:08:02.078926 | orchestrator | + osism apply --environment custom facts
2026-01-10 14:08:04.039634 | orchestrator | 2026-01-10 14:08:04 | INFO  | Trying to run play facts in environment custom
2026-01-10 14:08:14.175888 | orchestrator | 2026-01-10 14:08:14 | INFO  | Task 3ade39c6-a1e0-47b2-a940-372ce23fd123 (facts) was prepared for execution.
2026-01-10 14:08:14.175992 | orchestrator | 2026-01-10 14:08:14 | INFO  | It takes a moment until task 3ade39c6-a1e0-47b2-a940-372ce23fd123 (facts) has been started and output is visible here.
2026-01-10 14:08:58.433601 | orchestrator |
2026-01-10 14:08:58.433756 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-01-10 14:08:58.433775 | orchestrator |
2026-01-10 14:08:58.433787 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-10 14:08:58.433799 | orchestrator | Saturday 10 January 2026 14:08:18 +0000 (0:00:00.097) 0:00:00.098 ******
2026-01-10 14:08:58.433809 | orchestrator | ok: [testbed-manager]
2026-01-10 14:08:58.433821 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:08:58.433832 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:08:58.433842 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:08:58.433853 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:08:58.433862 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:08:58.433872 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:08:58.433883 | orchestrator |
2026-01-10 14:08:58.433893 | orchestrator | TASK [Copy fact file] **********************************************************
2026-01-10 14:08:58.433903 | orchestrator | Saturday 10 January 2026 14:08:19 +0000 (0:00:01.413) 0:00:01.511 ******
2026-01-10 14:08:58.433913 | orchestrator | ok: [testbed-manager]
2026-01-10 14:08:58.433922 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:08:58.433933 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:08:58.433942 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:08:58.433953 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:08:58.433963 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:08:58.433972 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:08:58.433982 | orchestrator |
2026-01-10 14:08:58.433992 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-01-10 14:08:58.434002 | orchestrator |
2026-01-10 14:08:58.434012 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-10 14:08:58.434119 | orchestrator | Saturday 10 January 2026 14:08:20 +0000 (0:00:01.204) 0:00:02.716 ******
2026-01-10 14:08:58.434132 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:08:58.434144 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:08:58.434156 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:08:58.434168 | orchestrator |
2026-01-10 14:08:58.434198 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-10 14:08:58.434212 | orchestrator | Saturday 10 January 2026 14:08:21 +0000 (0:00:00.118) 0:00:02.834 ******
2026-01-10 14:08:58.434225 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:08:58.434238 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:08:58.434250 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:08:58.434262 | orchestrator |
2026-01-10 14:08:58.434275 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-10 14:08:58.434288 | orchestrator | Saturday 10 January 2026 14:08:21 +0000 (0:00:00.209) 0:00:03.043 ******
2026-01-10 14:08:58.434300 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:08:58.434313 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:08:58.434325 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:08:58.434338 | orchestrator |
2026-01-10 14:08:58.434350 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-10 14:08:58.434363 | orchestrator | Saturday 10 January 2026 14:08:21 +0000 (0:00:00.149) 0:00:03.262 ******
2026-01-10 14:08:58.434378 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:08:58.434393 | orchestrator |
2026-01-10 14:08:58.434406 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-10 14:08:58.434417 | orchestrator | Saturday 10 January 2026 14:08:21 +0000 (0:00:00.149) 0:00:03.412 ******
2026-01-10 14:08:58.434428 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:08:58.434439 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:08:58.434450 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:08:58.434461 | orchestrator |
2026-01-10 14:08:58.434472 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-10 14:08:58.434483 | orchestrator | Saturday 10 January 2026 14:08:22 +0000 (0:00:00.451) 0:00:03.864 ******
2026-01-10 14:08:58.434494 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:08:58.434505 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:08:58.434516 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:08:58.434527 | orchestrator |
2026-01-10 14:08:58.434560 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-10 14:08:58.434572 | orchestrator | Saturday 10 January 2026 14:08:22 +0000 (0:00:00.144) 0:00:04.008 ******
2026-01-10 14:08:58.434583 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:08:58.434594 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:08:58.434605 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:08:58.434616 | orchestrator |
2026-01-10 14:08:58.434627 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-10 14:08:58.434637 | orchestrator | Saturday 10 January 2026 14:08:23 +0000 (0:00:01.065) 0:00:05.074 ******
2026-01-10 14:08:58.434648 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:08:58.434659 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:08:58.434670 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:08:58.434681 | orchestrator |
2026-01-10 14:08:58.434692 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-10 14:08:58.434703 | orchestrator | Saturday 10 January 2026 14:08:23 +0000 (0:00:00.462) 0:00:05.537 ******
2026-01-10 14:08:58.434714 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:08:58.434725 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:08:58.434736 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:08:58.434747 | orchestrator |
2026-01-10 14:08:58.434758 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-10 14:08:58.434769 | orchestrator | Saturday 10 January 2026 14:08:24 +0000 (0:00:01.093) 0:00:06.630 ******
2026-01-10 14:08:58.434780 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:08:58.434800 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:08:58.434811 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:08:58.434822 | orchestrator |
2026-01-10 14:08:58.434833 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-01-10 14:08:58.434843 | orchestrator | Saturday 10 January 2026 14:08:41 +0000 (0:00:16.487) 0:00:23.117 ******
2026-01-10 14:08:58.434854 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:08:58.434865 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:08:58.434876 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:08:58.434887 | orchestrator |
2026-01-10 14:08:58.434898 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-01-10 14:08:58.434929 | orchestrator | Saturday 10 January 2026 14:08:41 +0000 (0:00:00.107) 0:00:23.225 ******
2026-01-10 14:08:58.434941 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:08:58.434952 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:08:58.434963 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:08:58.434973 | orchestrator |
2026-01-10 14:08:58.434984 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-10 14:08:58.434995 | orchestrator | Saturday 10 January 2026 14:08:49 +0000 (0:00:07.735) 0:00:30.960 ******
2026-01-10 14:08:58.435006 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:08:58.435017 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:08:58.435028 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:08:58.435038 | orchestrator |
2026-01-10 14:08:58.435050 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-10 14:08:58.435060 | orchestrator | Saturday 10 January 2026 14:08:49 +0000 (0:00:00.515) 0:00:31.476 ******
2026-01-10 14:08:58.435072 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-01-10 14:08:58.435083 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-01-10 14:08:58.435093 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-01-10 14:08:58.435104 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-01-10 14:08:58.435115 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-01-10 14:08:58.435126 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-01-10 14:08:58.435136 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-01-10 14:08:58.435147 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-01-10 14:08:58.435158 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-01-10 14:08:58.435169 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-01-10 14:08:58.435180 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-01-10 14:08:58.435191 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-01-10 14:08:58.435202 | orchestrator |
2026-01-10 14:08:58.435213 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-10 14:08:58.435224 | orchestrator | Saturday 10 January 2026 14:08:53 +0000 (0:00:03.539) 0:00:35.016 ******
2026-01-10 14:08:58.435235 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:08:58.435246 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:08:58.435257 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:08:58.435267 | orchestrator |
2026-01-10 14:08:58.435278 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-10 14:08:58.435289 | orchestrator |
2026-01-10 14:08:58.435300 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-10 14:08:58.435311 | orchestrator | Saturday 10 January 2026 14:08:54 +0000 (0:00:01.364) 0:00:36.380 ******
2026-01-10 14:08:58.435322 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:08:58.435333 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:08:58.435343 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:08:58.435354 | orchestrator | ok: [testbed-manager]
2026-01-10 14:08:58.435365 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:08:58.435384 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:08:58.435395 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:08:58.435405 | orchestrator |
2026-01-10 14:08:58.435417 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:08:58.435474 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:08:58.435487 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:08:58.435500 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:08:58.435511 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:08:58.435523 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:08:58.435614 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:08:58.435628 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:08:58.435639 | orchestrator |
2026-01-10 14:08:58.435651 | orchestrator |
2026-01-10 14:08:58.435662 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:08:58.435674 | orchestrator | Saturday 10 January 2026 14:08:58 +0000 (0:00:03.758) 0:00:40.139 ******
2026-01-10 14:08:58.435685 | orchestrator | ===============================================================================
2026-01-10 14:08:58.435696 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.49s
2026-01-10 14:08:58.435707 | orchestrator | Install required packages (Debian) -------------------------------------- 7.74s
2026-01-10 14:08:58.435717 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.76s
2026-01-10 14:08:58.435728 | orchestrator | Copy fact files --------------------------------------------------------- 3.54s
2026-01-10 14:08:58.435739 | orchestrator | Create custom facts directory ------------------------------------------- 1.41s
2026-01-10 14:08:58.435750 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.36s
2026-01-10 14:08:58.435770 | orchestrator | Copy fact file ---------------------------------------------------------- 1.20s
2026-01-10 14:08:58.685738 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.09s
2026-01-10 14:08:58.685866 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.07s
2026-01-10 14:08:58.685880 | orchestrator | Create custom facts directory ------------------------------------------- 0.52s
2026-01-10 14:08:58.685893 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2026-01-10 14:08:58.685904 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s
2026-01-10 14:08:58.685915 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s
2026-01-10 14:08:58.685926 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2026-01-10 14:08:58.685937 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2026-01-10 14:08:58.685949 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2026-01-10 14:08:58.685960 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2026-01-10 14:08:58.685971 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2026-01-10 14:08:59.054095 | orchestrator | + osism apply bootstrap
2026-01-10 14:09:11.203305 | orchestrator | 2026-01-10 14:09:11 | INFO  | Task c098f5be-61c5-476c-bfb4-64c0ff4f0b9c (bootstrap) was prepared for execution.
2026-01-10 14:09:11.203433 | orchestrator | 2026-01-10 14:09:11 | INFO  | It takes a moment until task c098f5be-61c5-476c-bfb4-64c0ff4f0b9c (bootstrap) has been started and output is visible here.
2026-01-10 14:09:27.673292 | orchestrator |
2026-01-10 14:09:27.694528 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-01-10 14:09:27.694650 | orchestrator |
2026-01-10 14:09:27.694665 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-01-10 14:09:27.694677 | orchestrator | Saturday 10 January 2026 14:09:15 +0000 (0:00:00.156) 0:00:00.156 ******
2026-01-10 14:09:27.694689 | orchestrator | ok: [testbed-manager]
2026-01-10 14:09:27.694702 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:09:27.694713 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:09:27.694724 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:09:27.694735 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:09:27.694745 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:09:27.694756 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:09:27.694767 | orchestrator |
2026-01-10 14:09:27.694778 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-10 14:09:27.694789 | orchestrator |
2026-01-10 14:09:27.694800 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-10 14:09:27.694812 | orchestrator | Saturday 10 January 2026 14:09:15 +0000 (0:00:00.259) 0:00:00.416 ******
2026-01-10 14:09:27.694822 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:09:27.694833 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:09:27.694846 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:09:27.694857 | orchestrator | ok: [testbed-manager]
2026-01-10 14:09:27.694868 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:09:27.694878 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:09:27.694889 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:09:27.694900 | orchestrator |
2026-01-10 14:09:27.694911 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-01-10 14:09:27.694922 | orchestrator |
2026-01-10 14:09:27.694933 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-10 14:09:27.694944 | orchestrator | Saturday 10 January 2026 14:09:19 +0000 (0:00:03.816) 0:00:04.232 ******
2026-01-10 14:09:27.694956 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-01-10 14:09:27.694967 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-01-10 14:09:27.694978 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-01-10 14:09:27.694989 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-01-10 14:09:27.695000 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:09:27.695011 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-01-10 14:09:27.695022 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-10 14:09:27.695033 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-10 14:09:27.695044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-10 14:09:27.695055 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-01-10 14:09:27.695066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-10 14:09:27.695077 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-01-10 14:09:27.695104 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-01-10 14:09:27.695116 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-10 14:09:27.695126 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-01-10 14:09:27.695138 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:09:27.695149 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-10 14:09:27.695160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-10 14:09:27.695171 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-01-10 14:09:27.695182 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-10 14:09:27.695228 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-10 14:09:27.695240 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-10 14:09:27.695251 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-10 14:09:27.695262 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-10 14:09:27.695272 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:09:27.695283 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-10 14:09:27.695294 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-10 14:09:27.695304 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-10 14:09:27.695315 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-10 14:09:27.695326 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-01-10 14:09:27.695336 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-10 14:09:27.695347 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-10 14:09:27.695358 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-10 14:09:27.695368 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-10 14:09:27.695379 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-10 14:09:27.695390 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-10 14:09:27.695400 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-10 14:09:27.695411 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-10 14:09:27.695421 | orchestrator | skipping:
[testbed-node-5] 2026-01-10 14:09:27.695432 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-10 14:09:27.695443 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:09:27.695453 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-10 14:09:27.695464 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-01-10 14:09:27.695475 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:09:27.695486 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-10 14:09:27.695497 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-01-10 14:09:27.695508 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-01-10 14:09:27.695576 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-10 14:09:27.695590 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-01-10 14:09:27.695601 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-10 14:09:27.695611 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:09:27.695622 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-01-10 14:09:27.695633 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-10 14:09:27.695644 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-10 14:09:27.695654 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-10 14:09:27.695665 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:09:27.695676 | orchestrator | 2026-01-10 14:09:27.695687 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-01-10 14:09:27.695698 | orchestrator | 2026-01-10 14:09:27.695708 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-01-10 14:09:27.695719 | orchestrator | Saturday 10 January 2026 14:09:20 +0000 (0:00:00.533) 
0:00:04.766 ****** 2026-01-10 14:09:27.695730 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:09:27.695741 | orchestrator | ok: [testbed-manager] 2026-01-10 14:09:27.695752 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:09:27.695763 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:09:27.695773 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:09:27.695784 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:09:27.695794 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:09:27.695805 | orchestrator | 2026-01-10 14:09:27.695816 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-01-10 14:09:27.695836 | orchestrator | Saturday 10 January 2026 14:09:21 +0000 (0:00:01.240) 0:00:06.007 ****** 2026-01-10 14:09:27.695847 | orchestrator | ok: [testbed-manager] 2026-01-10 14:09:27.695858 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:09:27.695877 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:09:27.695897 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:09:27.695915 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:09:27.695931 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:09:27.695947 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:09:27.695964 | orchestrator | 2026-01-10 14:09:27.695982 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-01-10 14:09:27.696002 | orchestrator | Saturday 10 January 2026 14:09:22 +0000 (0:00:01.240) 0:00:07.247 ****** 2026-01-10 14:09:27.696024 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:09:27.696048 | orchestrator | 2026-01-10 14:09:27.696061 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-01-10 14:09:27.696072 | orchestrator | Saturday 10 
January 2026 14:09:22 +0000 (0:00:00.285) 0:00:07.532 ****** 2026-01-10 14:09:27.696083 | orchestrator | changed: [testbed-manager] 2026-01-10 14:09:27.696094 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:09:27.696105 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:09:27.696115 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:09:27.696126 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:09:27.696137 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:09:27.696147 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:09:27.696158 | orchestrator | 2026-01-10 14:09:27.696169 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-01-10 14:09:27.696180 | orchestrator | Saturday 10 January 2026 14:09:25 +0000 (0:00:02.152) 0:00:09.684 ****** 2026-01-10 14:09:27.696190 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:09:27.696203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:09:27.696215 | orchestrator | 2026-01-10 14:09:27.696227 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-01-10 14:09:27.696237 | orchestrator | Saturday 10 January 2026 14:09:25 +0000 (0:00:00.302) 0:00:09.986 ****** 2026-01-10 14:09:27.696248 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:09:27.696259 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:09:27.696270 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:09:27.696280 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:09:27.696291 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:09:27.696301 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:09:27.696312 | orchestrator | 2026-01-10 14:09:27.696323 | orchestrator | TASK [osism.commons.proxy : Set system 
wide settings in environment file] ****** 2026-01-10 14:09:27.696334 | orchestrator | Saturday 10 January 2026 14:09:26 +0000 (0:00:01.023) 0:00:11.009 ****** 2026-01-10 14:09:27.696345 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:09:27.696355 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:09:27.696366 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:09:27.696376 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:09:27.696408 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:09:27.696419 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:09:27.696431 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:09:27.696441 | orchestrator | 2026-01-10 14:09:27.696452 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-01-10 14:09:27.696463 | orchestrator | Saturday 10 January 2026 14:09:27 +0000 (0:00:00.638) 0:00:11.648 ****** 2026-01-10 14:09:27.696474 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:09:27.696493 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:09:27.696504 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:09:27.696514 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:09:27.696525 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:09:27.696563 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:09:27.696577 | orchestrator | ok: [testbed-manager] 2026-01-10 14:09:27.696589 | orchestrator | 2026-01-10 14:09:27.696599 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-10 14:09:27.696611 | orchestrator | Saturday 10 January 2026 14:09:27 +0000 (0:00:00.431) 0:00:12.079 ****** 2026-01-10 14:09:27.696623 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:09:27.696640 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:09:27.696662 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:09:40.000348 | orchestrator | skipping: 
[testbed-node-5] 2026-01-10 14:09:40.000634 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:09:40.000671 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:09:40.000691 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:09:40.000712 | orchestrator | 2026-01-10 14:09:40.000736 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-10 14:09:40.000758 | orchestrator | Saturday 10 January 2026 14:09:27 +0000 (0:00:00.215) 0:00:12.295 ****** 2026-01-10 14:09:40.000779 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:09:40.000821 | orchestrator | 2026-01-10 14:09:40.000842 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-10 14:09:40.000863 | orchestrator | Saturday 10 January 2026 14:09:28 +0000 (0:00:00.289) 0:00:12.585 ****** 2026-01-10 14:09:40.000883 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:09:40.000902 | orchestrator | 2026-01-10 14:09:40.000954 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-10 14:09:40.000976 | orchestrator | Saturday 10 January 2026 14:09:28 +0000 (0:00:00.323) 0:00:12.908 ****** 2026-01-10 14:09:40.000995 | orchestrator | ok: [testbed-manager] 2026-01-10 14:09:40.001015 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:09:40.001033 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:09:40.001051 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:09:40.001070 | orchestrator | ok: [testbed-node-4] 2026-01-10 
14:09:40.001088 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:09:40.001107 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:09:40.001126 | orchestrator | 2026-01-10 14:09:40.001145 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-10 14:09:40.001164 | orchestrator | Saturday 10 January 2026 14:09:29 +0000 (0:00:01.415) 0:00:14.323 ****** 2026-01-10 14:09:40.001177 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:09:40.001190 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:09:40.001201 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:09:40.001212 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:09:40.001223 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:09:40.001234 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:09:40.001246 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:09:40.001257 | orchestrator | 2026-01-10 14:09:40.001268 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-10 14:09:40.001280 | orchestrator | Saturday 10 January 2026 14:09:30 +0000 (0:00:00.309) 0:00:14.633 ****** 2026-01-10 14:09:40.001291 | orchestrator | ok: [testbed-manager] 2026-01-10 14:09:40.001302 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:09:40.001314 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:09:40.001333 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:09:40.001351 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:09:40.001407 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:09:40.001428 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:09:40.001448 | orchestrator | 2026-01-10 14:09:40.001461 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-10 14:09:40.001472 | orchestrator | Saturday 10 January 2026 14:09:30 +0000 (0:00:00.529) 0:00:15.162 ****** 2026-01-10 14:09:40.001483 | orchestrator | skipping: 
[testbed-manager] 2026-01-10 14:09:40.001494 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:09:40.001505 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:09:40.001516 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:09:40.001527 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:09:40.001571 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:09:40.001583 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:09:40.001594 | orchestrator | 2026-01-10 14:09:40.001605 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-10 14:09:40.001618 | orchestrator | Saturday 10 January 2026 14:09:30 +0000 (0:00:00.273) 0:00:15.436 ****** 2026-01-10 14:09:40.001629 | orchestrator | ok: [testbed-manager] 2026-01-10 14:09:40.001640 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:09:40.001650 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:09:40.001661 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:09:40.001672 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:09:40.001700 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:09:40.001711 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:09:40.001735 | orchestrator | 2026-01-10 14:09:40.001758 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-10 14:09:40.001781 | orchestrator | Saturday 10 January 2026 14:09:31 +0000 (0:00:00.602) 0:00:16.039 ****** 2026-01-10 14:09:40.001792 | orchestrator | ok: [testbed-manager] 2026-01-10 14:09:40.001815 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:09:40.001826 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:09:40.001837 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:09:40.001848 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:09:40.001859 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:09:40.001870 | orchestrator | changed: 
[testbed-node-2] 2026-01-10 14:09:40.001881 | orchestrator | 2026-01-10 14:09:40.001892 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-10 14:09:40.001903 | orchestrator | Saturday 10 January 2026 14:09:32 +0000 (0:00:01.062) 0:00:17.101 ****** 2026-01-10 14:09:40.001914 | orchestrator | ok: [testbed-manager] 2026-01-10 14:09:40.001925 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:09:40.001936 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:09:40.001947 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:09:40.001958 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:09:40.001969 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:09:40.001980 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:09:40.001997 | orchestrator | 2026-01-10 14:09:40.002058 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-10 14:09:40.002074 | orchestrator | Saturday 10 January 2026 14:09:33 +0000 (0:00:01.161) 0:00:18.263 ****** 2026-01-10 14:09:40.002121 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:09:40.002134 | orchestrator | 2026-01-10 14:09:40.002146 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-10 14:09:40.002157 | orchestrator | Saturday 10 January 2026 14:09:34 +0000 (0:00:00.314) 0:00:18.578 ****** 2026-01-10 14:09:40.002168 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:09:40.002179 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:09:40.002190 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:09:40.002200 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:09:40.002211 | orchestrator | changed: [testbed-node-2] 2026-01-10 
14:09:40.002233 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:09:40.002244 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:09:40.002255 | orchestrator | 2026-01-10 14:09:40.002266 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-10 14:09:40.002277 | orchestrator | Saturday 10 January 2026 14:09:35 +0000 (0:00:01.271) 0:00:19.850 ****** 2026-01-10 14:09:40.002288 | orchestrator | ok: [testbed-manager] 2026-01-10 14:09:40.002299 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:09:40.002310 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:09:40.002327 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:09:40.002345 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:09:40.002362 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:09:40.002380 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:09:40.002399 | orchestrator | 2026-01-10 14:09:40.002418 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-10 14:09:40.002436 | orchestrator | Saturday 10 January 2026 14:09:35 +0000 (0:00:00.249) 0:00:20.099 ****** 2026-01-10 14:09:40.002453 | orchestrator | ok: [testbed-manager] 2026-01-10 14:09:40.002465 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:09:40.002476 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:09:40.002487 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:09:40.002498 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:09:40.002509 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:09:40.002520 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:09:40.002531 | orchestrator | 2026-01-10 14:09:40.002586 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-10 14:09:40.002614 | orchestrator | Saturday 10 January 2026 14:09:35 +0000 (0:00:00.224) 0:00:20.324 ****** 2026-01-10 14:09:40.002635 | orchestrator | ok: [testbed-manager] 2026-01-10 14:09:40.002674 | 
orchestrator | ok: [testbed-node-3] 2026-01-10 14:09:40.002706 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:09:40.002724 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:09:40.002740 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:09:40.002756 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:09:40.002773 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:09:40.002789 | orchestrator | 2026-01-10 14:09:40.002805 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-10 14:09:40.002822 | orchestrator | Saturday 10 January 2026 14:09:36 +0000 (0:00:00.233) 0:00:20.557 ****** 2026-01-10 14:09:40.002842 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:09:40.002863 | orchestrator | 2026-01-10 14:09:40.002883 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-10 14:09:40.002896 | orchestrator | Saturday 10 January 2026 14:09:36 +0000 (0:00:00.317) 0:00:20.875 ****** 2026-01-10 14:09:40.002907 | orchestrator | ok: [testbed-manager] 2026-01-10 14:09:40.002918 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:09:40.002929 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:09:40.002940 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:09:40.002951 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:09:40.002962 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:09:40.002973 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:09:40.002984 | orchestrator | 2026-01-10 14:09:40.002995 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-10 14:09:40.003006 | orchestrator | Saturday 10 January 2026 14:09:36 +0000 (0:00:00.547) 0:00:21.423 ****** 2026-01-10 14:09:40.003017 | orchestrator | 
skipping: [testbed-manager] 2026-01-10 14:09:40.003028 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:09:40.003039 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:09:40.003050 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:09:40.003061 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:09:40.003072 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:09:40.003083 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:09:40.003105 | orchestrator | 2026-01-10 14:09:40.003116 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-10 14:09:40.003127 | orchestrator | Saturday 10 January 2026 14:09:37 +0000 (0:00:00.238) 0:00:21.662 ****** 2026-01-10 14:09:40.003138 | orchestrator | ok: [testbed-manager] 2026-01-10 14:09:40.003149 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:09:40.003160 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:09:40.003171 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:09:40.003182 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:09:40.003193 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:09:40.003203 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:09:40.003214 | orchestrator | 2026-01-10 14:09:40.003225 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-10 14:09:40.003238 | orchestrator | Saturday 10 January 2026 14:09:38 +0000 (0:00:01.129) 0:00:22.791 ****** 2026-01-10 14:09:40.003257 | orchestrator | ok: [testbed-manager] 2026-01-10 14:09:40.003273 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:09:40.003300 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:09:40.003319 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:09:40.003336 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:09:40.003353 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:09:40.003371 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:09:40.003388 | orchestrator | 
2026-01-10 14:09:40.003406 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-10 14:09:40.003424 | orchestrator | Saturday 10 January 2026 14:09:38 +0000 (0:00:00.575) 0:00:23.367 ****** 2026-01-10 14:09:40.003440 | orchestrator | ok: [testbed-manager] 2026-01-10 14:09:40.003458 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:09:40.003477 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:09:40.003492 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:09:40.003523 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:10:22.369057 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:10:22.369225 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:10:22.369243 | orchestrator | 2026-01-10 14:10:22.369257 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-10 14:10:22.369271 | orchestrator | Saturday 10 January 2026 14:09:39 +0000 (0:00:01.167) 0:00:24.534 ****** 2026-01-10 14:10:22.369282 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:10:22.369294 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:10:22.369306 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:10:22.369316 | orchestrator | changed: [testbed-manager] 2026-01-10 14:10:22.369328 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:10:22.369339 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:10:22.369350 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:10:22.369361 | orchestrator | 2026-01-10 14:10:22.369373 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2026-01-10 14:10:22.369384 | orchestrator | Saturday 10 January 2026 14:09:57 +0000 (0:00:17.268) 0:00:41.803 ****** 2026-01-10 14:10:22.369395 | orchestrator | ok: [testbed-manager] 2026-01-10 14:10:22.369408 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:10:22.369419 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:10:22.369430 | orchestrator 
| ok: [testbed-node-5] 2026-01-10 14:10:22.369441 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:10:22.369452 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:10:22.369463 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:10:22.369474 | orchestrator | 2026-01-10 14:10:22.369485 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-01-10 14:10:22.369496 | orchestrator | Saturday 10 January 2026 14:09:57 +0000 (0:00:00.221) 0:00:42.024 ****** 2026-01-10 14:10:22.369507 | orchestrator | ok: [testbed-manager] 2026-01-10 14:10:22.369518 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:10:22.369529 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:10:22.369572 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:10:22.369585 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:10:22.369596 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:10:22.369608 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:10:22.369652 | orchestrator | 2026-01-10 14:10:22.369664 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-01-10 14:10:22.369677 | orchestrator | Saturday 10 January 2026 14:09:57 +0000 (0:00:00.220) 0:00:42.245 ****** 2026-01-10 14:10:22.369688 | orchestrator | ok: [testbed-manager] 2026-01-10 14:10:22.369701 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:10:22.369713 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:10:22.369725 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:10:22.369737 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:10:22.369749 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:10:22.369761 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:10:22.369773 | orchestrator | 2026-01-10 14:10:22.369786 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-01-10 14:10:22.369798 | orchestrator | Saturday 10 January 2026 14:09:57 +0000 (0:00:00.226) 0:00:42.471 ****** 2026-01-10 
14:10:22.369814 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:10:22.369829 | orchestrator |
2026-01-10 14:10:22.369842 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-01-10 14:10:22.369855 | orchestrator | Saturday 10 January 2026 14:09:58 +0000 (0:00:00.301) 0:00:42.772 ******
2026-01-10 14:10:22.369867 | orchestrator | ok: [testbed-manager]
2026-01-10 14:10:22.369879 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:22.369891 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:10:22.369904 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:10:22.369917 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:22.369927 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:10:22.369938 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:22.369949 | orchestrator |
2026-01-10 14:10:22.369960 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-01-10 14:10:22.369971 | orchestrator | Saturday 10 January 2026 14:10:00 +0000 (0:00:01.818) 0:00:44.591 ******
2026-01-10 14:10:22.369982 | orchestrator | changed: [testbed-manager]
2026-01-10 14:10:22.369993 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:10:22.370077 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:10:22.370104 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:10:22.370123 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:10:22.370143 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:10:22.370164 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:10:22.370183 | orchestrator |
2026-01-10 14:10:22.370195 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-01-10 14:10:22.370207 | orchestrator | Saturday 10 January 2026 14:10:01 +0000 (0:00:01.075) 0:00:45.667 ******
2026-01-10 14:10:22.370217 | orchestrator | ok: [testbed-manager]
2026-01-10 14:10:22.370228 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:22.370239 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:22.370250 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:22.370260 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:10:22.370271 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:10:22.370281 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:10:22.370292 | orchestrator |
2026-01-10 14:10:22.370303 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-01-10 14:10:22.370314 | orchestrator | Saturday 10 January 2026 14:10:02 +0000 (0:00:00.913) 0:00:46.580 ******
2026-01-10 14:10:22.370326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:10:22.370339 | orchestrator |
2026-01-10 14:10:22.370350 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-01-10 14:10:22.370362 | orchestrator | Saturday 10 January 2026 14:10:02 +0000 (0:00:00.300) 0:00:46.880 ******
2026-01-10 14:10:22.370373 | orchestrator | changed: [testbed-manager]
2026-01-10 14:10:22.370395 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:10:22.370406 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:10:22.370417 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:10:22.370428 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:10:22.370445 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:10:22.370455 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:10:22.370466 | orchestrator |
2026-01-10 14:10:22.370498 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-01-10 14:10:22.370510 | orchestrator | Saturday 10 January 2026 14:10:03 +0000 (0:00:01.158) 0:00:48.038 ******
2026-01-10 14:10:22.370520 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:10:22.370553 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:10:22.370569 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:10:22.370580 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:10:22.370591 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:10:22.370601 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:10:22.370612 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:10:22.370623 | orchestrator |
2026-01-10 14:10:22.370634 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-01-10 14:10:22.370645 | orchestrator | Saturday 10 January 2026 14:10:03 +0000 (0:00:00.219) 0:00:48.258 ******
2026-01-10 14:10:22.370657 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:10:22.370668 | orchestrator |
2026-01-10 14:10:22.370679 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-01-10 14:10:22.370690 | orchestrator | Saturday 10 January 2026 14:10:04 +0000 (0:00:00.322) 0:00:48.580 ******
2026-01-10 14:10:22.370701 | orchestrator | ok: [testbed-manager]
2026-01-10 14:10:22.370712 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:22.370723 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:22.370733 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:10:22.370744 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:22.370755 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:10:22.370765 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:10:22.370776 | orchestrator |
2026-01-10 14:10:22.370787 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-01-10 14:10:22.370797 | orchestrator | Saturday 10 January 2026 14:10:05 +0000 (0:00:01.654) 0:00:50.235 ******
2026-01-10 14:10:22.370808 | orchestrator | changed: [testbed-manager]
2026-01-10 14:10:22.370819 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:10:22.370830 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:10:22.370840 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:10:22.370851 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:10:22.370862 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:10:22.370872 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:10:22.370883 | orchestrator |
2026-01-10 14:10:22.370894 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-01-10 14:10:22.370904 | orchestrator | Saturday 10 January 2026 14:10:06 +0000 (0:00:01.177) 0:00:51.412 ******
2026-01-10 14:10:22.370915 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:10:22.370926 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:10:22.370937 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:10:22.370947 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:10:22.370958 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:10:22.370969 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:10:22.370979 | orchestrator | changed: [testbed-manager]
2026-01-10 14:10:22.370990 | orchestrator |
2026-01-10 14:10:22.371001 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-01-10 14:10:22.371011 | orchestrator | Saturday 10 January 2026 14:10:19 +0000 (0:00:12.848) 0:01:04.260 ******
2026-01-10 14:10:22.371022 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:10:22.371045 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:22.371065 | orchestrator | ok: [testbed-manager]
2026-01-10 14:10:22.371082 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:22.371101 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:10:22.371120 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:10:22.371139 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:22.371159 | orchestrator |
2026-01-10 14:10:22.371178 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-01-10 14:10:22.371197 | orchestrator | Saturday 10 January 2026 14:10:20 +0000 (0:00:00.940) 0:01:05.201 ******
2026-01-10 14:10:22.371208 | orchestrator | ok: [testbed-manager]
2026-01-10 14:10:22.371219 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:22.371230 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:22.371240 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:10:22.371251 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:22.371262 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:10:22.371272 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:10:22.371283 | orchestrator |
2026-01-10 14:10:22.371294 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-01-10 14:10:22.371305 | orchestrator | Saturday 10 January 2026 14:10:21 +0000 (0:00:00.925) 0:01:06.126 ******
2026-01-10 14:10:22.371316 | orchestrator | ok: [testbed-manager]
2026-01-10 14:10:22.371327 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:22.371337 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:22.371348 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:22.371359 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:10:22.371370 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:10:22.371380 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:10:22.371391 | orchestrator |
2026-01-10 14:10:22.371402 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-01-10 14:10:22.371413 | orchestrator | Saturday 10 January 2026 14:10:21 +0000 (0:00:00.230) 0:01:06.356 ******
2026-01-10 14:10:22.371424 | orchestrator | ok: [testbed-manager]
2026-01-10 14:10:22.371434 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:10:22.371445 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:10:22.371456 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:10:22.371466 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:10:22.371477 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:10:22.371488 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:10:22.371499 | orchestrator |
2026-01-10 14:10:22.371510 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-01-10 14:10:22.371521 | orchestrator | Saturday 10 January 2026 14:10:22 +0000 (0:00:00.254) 0:01:06.611 ******
2026-01-10 14:10:22.371593 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:10:22.371610 | orchestrator |
2026-01-10 14:10:22.371631 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-01-10 14:12:43.360342 | orchestrator | Saturday 10 January 2026 14:10:22 +0000 (0:00:00.296) 0:01:06.908 ******
2026-01-10 14:12:43.360458 | orchestrator | ok: [testbed-manager]
2026-01-10 14:12:43.360476 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:12:43.360488 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:12:43.360500 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:12:43.360551 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:12:43.360570 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:12:43.360588 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:12:43.360608 | orchestrator |
2026-01-10 14:12:43.360630 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-01-10 14:12:43.360651 | orchestrator | Saturday 10 January 2026 14:10:23 +0000 (0:00:01.565) 0:01:08.474 ******
2026-01-10 14:12:43.360670 | orchestrator | changed: [testbed-manager]
2026-01-10 14:12:43.360685 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:12:43.360697 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:12:43.360708 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:12:43.360747 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:12:43.360759 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:12:43.360770 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:12:43.360781 | orchestrator |
2026-01-10 14:12:43.360793 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-01-10 14:12:43.360805 | orchestrator | Saturday 10 January 2026 14:10:24 +0000 (0:00:00.623) 0:01:09.097 ******
2026-01-10 14:12:43.360816 | orchestrator | ok: [testbed-manager]
2026-01-10 14:12:43.360827 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:12:43.360838 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:12:43.360849 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:12:43.360860 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:12:43.360870 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:12:43.360882 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:12:43.360894 | orchestrator |
2026-01-10 14:12:43.360907 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-01-10 14:12:43.360919 | orchestrator | Saturday 10 January 2026 14:10:24 +0000 (0:00:00.246) 0:01:09.343 ******
2026-01-10 14:12:43.360931 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:12:43.360944 | orchestrator | ok: [testbed-manager]
2026-01-10 14:12:43.360956 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:12:43.360968 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:12:43.360980 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:12:43.360992 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:12:43.361004 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:12:43.361016 | orchestrator |
2026-01-10 14:12:43.361029 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-01-10 14:12:43.361041 | orchestrator | Saturday 10 January 2026 14:10:25 +0000 (0:00:01.202) 0:01:10.545 ******
2026-01-10 14:12:43.361053 | orchestrator | changed: [testbed-manager]
2026-01-10 14:12:43.361066 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:12:43.361079 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:12:43.361091 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:12:43.361103 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:12:43.361116 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:12:43.361128 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:12:43.361140 | orchestrator |
2026-01-10 14:12:43.361153 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-01-10 14:12:43.361166 | orchestrator | Saturday 10 January 2026 14:10:27 +0000 (0:00:01.756) 0:01:12.302 ******
2026-01-10 14:12:43.361178 | orchestrator | ok: [testbed-manager]
2026-01-10 14:12:43.361190 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:12:43.361203 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:12:43.361215 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:12:43.361227 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:12:43.361239 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:12:43.361250 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:12:43.361261 | orchestrator |
2026-01-10 14:12:43.361272 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-01-10 14:12:43.361283 | orchestrator | Saturday 10 January 2026 14:10:30 +0000 (0:00:02.276) 0:01:14.578 ******
2026-01-10 14:12:43.361294 | orchestrator | ok: [testbed-manager]
2026-01-10 14:12:43.361305 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:12:43.361316 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:12:43.361327 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:12:43.361337 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:12:43.361348 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:12:43.361359 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:12:43.361369 | orchestrator |
2026-01-10 14:12:43.361380 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-01-10 14:12:43.361391 | orchestrator | Saturday 10 January 2026 14:11:07 +0000 (0:00:37.056) 0:01:51.635 ******
2026-01-10 14:12:43.361402 | orchestrator | changed: [testbed-manager]
2026-01-10 14:12:43.361413 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:12:43.361424 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:12:43.361442 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:12:43.361454 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:12:43.361464 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:12:43.361475 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:12:43.361486 | orchestrator |
2026-01-10 14:12:43.361497 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-01-10 14:12:43.361508 | orchestrator | Saturday 10 January 2026 14:12:26 +0000 (0:01:19.176) 0:03:10.812 ******
2026-01-10 14:12:43.361582 | orchestrator | ok: [testbed-manager]
2026-01-10 14:12:43.361601 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:12:43.361620 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:12:43.361639 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:12:43.361659 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:12:43.361678 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:12:43.361696 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:12:43.361710 | orchestrator |
2026-01-10 14:12:43.361721 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-01-10 14:12:43.361732 | orchestrator | Saturday 10 January 2026 14:12:27 +0000 (0:00:01.694) 0:03:12.506 ******
2026-01-10 14:12:43.361743 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:12:43.361754 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:12:43.361765 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:12:43.361776 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:12:43.361787 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:12:43.361797 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:12:43.361808 | orchestrator | changed: [testbed-manager]
2026-01-10 14:12:43.361819 | orchestrator |
2026-01-10 14:12:43.361830 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-01-10 14:12:43.361841 | orchestrator | Saturday 10 January 2026 14:12:41 +0000 (0:00:13.091) 0:03:25.598 ******
2026-01-10 14:12:43.361889 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-01-10 14:12:43.361907 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-01-10 14:12:43.361922 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-01-10 14:12:43.361942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-10 14:12:43.361953 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-10 14:12:43.361975 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-01-10 14:12:43.361986 | orchestrator |
2026-01-10 14:12:43.362001 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-01-10 14:12:43.362013 | orchestrator | Saturday 10 January 2026 14:12:41 +0000 (0:00:00.427) 0:03:26.026 ******
2026-01-10 14:12:43.362076 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:12:43.362087 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:12:43.362098 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:12:43.362109 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:12:43.362120 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:12:43.362131 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:12:43.362141 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:12:43.362152 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:12:43.362163 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:12:43.362174 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:12:43.362185 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-10 14:12:43.362196 | orchestrator |
2026-01-10 14:12:43.362207 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-01-10 14:12:43.362218 | orchestrator | Saturday 10 January 2026 14:12:43 +0000 (0:00:01.775) 0:03:27.801 ******
2026-01-10 14:12:43.362240 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:12:43.362253 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:12:43.362264 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:12:43.362275 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:12:43.362290 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:12:43.362309 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:12:50.463380 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:12:50.463465 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:12:50.463473 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:12:50.463478 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:12:50.463483 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:12:50.463487 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:12:50.463491 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:12:50.463495 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:12:50.463499 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:12:50.463503 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:12:50.463582 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:12:50.463588 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:12:50.463593 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:12:50.463597 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:12:50.463601 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:12:50.463605 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:12:50.463608 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:12:50.463612 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:12:50.463617 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:12:50.463621 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:12:50.463625 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:12:50.463628 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:12:50.463632 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:12:50.463636 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:12:50.463640 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:12:50.463643 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:12:50.463647 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:12:50.463651 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:12:50.463655 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:12:50.463659 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:12:50.463663 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:12:50.463667 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:12:50.463670 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:12:50.463674 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:12:50.463678 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:12:50.463682 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:12:50.463685 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:12:50.463689 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:12:50.463693 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:12:50.463697 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:12:50.463701 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-10 14:12:50.463704 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:12:50.463718 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:12:50.463734 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-10 14:12:50.463742 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:12:50.463746 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:12:50.463750 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:12:50.463754 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:12:50.463758 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:12:50.463762 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:12:50.463766 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-10 14:12:50.463769 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:12:50.463773 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:12:50.463777 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-10 14:12:50.463781 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:12:50.463785 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:12:50.463789 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-10 14:12:50.463792 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:12:50.463796 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:12:50.463800 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-10 14:12:50.463804 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:12:50.463808 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-10 14:12:50.463812 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:12:50.463816 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-10 14:12:50.463820 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:12:50.463823 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-10 14:12:50.463827 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:12:50.463831 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-10 14:12:50.463835 | orchestrator |
2026-01-10 14:12:50.463839 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-01-10 14:12:50.463843 | orchestrator | Saturday 10 January 2026 14:12:48 +0000 (0:00:05.110) 0:03:32.912 ******
2026-01-10 14:12:50.463847 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:12:50.463851 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:12:50.463855 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:12:50.463859 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:12:50.463863 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:12:50.463866 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:12:50.463870 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-10 14:12:50.463877 | orchestrator |
2026-01-10 14:12:50.463882 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-01-10 14:12:50.463886 | orchestrator | Saturday 10 January 2026 14:12:49 +0000 (0:00:01.562) 0:03:34.475 ******
2026-01-10 14:12:50.463890 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:12:50.463894 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:12:50.463898 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:12:50.463901 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:12:50.463905 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:12:50.463909 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:12:50.463913 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:12:50.463917 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:12:50.463923 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:12:50.463928 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:12:50.463934 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:13:05.705875 | orchestrator |
2026-01-10 14:13:05.705984 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-01-10 14:13:05.706002 | orchestrator | Saturday 10 January 2026 14:12:50 +0000 (0:00:00.528) 0:03:35.004 ******
2026-01-10 14:13:05.706083 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:13:05.706109 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:13:05.706129 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:13:05.706149 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:13:05.706168 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:13:05.706188 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:13:05.706208 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:13:05.706229 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:13:05.706248 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:13:05.706269 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:13:05.706288 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-10 14:13:05.706309 | orchestrator |
2026-01-10 14:13:05.706331 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-01-10 14:13:05.706352 | orchestrator | Saturday 10 January 2026 14:12:52 +0000 (0:00:01.624) 0:03:36.629 ******
2026-01-10 14:13:05.706374 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:13:05.706393 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:13:05.706410 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:13:05.706430 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:13:05.706457 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:13:05.706477 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:13:05.706495 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:13:05.706543 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:13:05.706563 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:13:05.706615 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:13:05.706635 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-10 14:13:05.706653 | orchestrator |
2026-01-10 14:13:05.706670 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-01-10 14:13:05.706688 | orchestrator | Saturday 10 January 2026 14:12:52 +0000 (0:00:00.740) 0:03:37.369 ******
2026-01-10 14:13:05.706706 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:13:05.706724 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:13:05.706742 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:13:05.706760 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:13:05.706778 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:13:05.706798 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:13:05.706816 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:13:05.706833 | orchestrator |
2026-01-10 14:13:05.706850 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-01-10 14:13:05.706869 | orchestrator | Saturday 10 January 2026 14:12:53 +0000 (0:00:00.325) 0:03:37.694 ******
2026-01-10 14:13:05.706888 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:13:05.706908 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:13:05.706927 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:13:05.706944 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:13:05.706963 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:13:05.706982 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:13:05.706999 | orchestrator | ok: [testbed-manager]
2026-01-10 14:13:05.707018 | orchestrator |
2026-01-10 14:13:05.707038 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-01-10 14:13:05.707057 | orchestrator | Saturday 10 January 2026 14:12:58 +0000 (0:00:05.633) 0:03:43.328 ******
2026-01-10 14:13:05.707077 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-01-10 14:13:05.707095 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-01-10 14:13:05.707114 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:13:05.707134 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:13:05.707151 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-01-10 14:13:05.707170 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-01-10 14:13:05.707189 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:13:05.707208 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:13:05.707228 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-01-10 14:13:05.707247 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-01-10 14:13:05.707266 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:13:05.707285 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:13:05.707303 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-01-10 14:13:05.707321 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:13:05.707340 | orchestrator |
2026-01-10 14:13:05.707358 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-01-10 14:13:05.707377 | orchestrator | Saturday 10 January 2026 14:12:59 +0000 (0:00:00.351) 0:03:43.679 ******
2026-01-10 14:13:05.707394 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-01-10 14:13:05.707414 | orchestrator | ok: [testbed-node-5] =>
(item=cron) 2026-01-10 14:13:05.707434 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-01-10 14:13:05.707479 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-01-10 14:13:05.707569 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-01-10 14:13:05.707591 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-01-10 14:13:05.707612 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-01-10 14:13:05.707631 | orchestrator | 2026-01-10 14:13:05.707650 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-01-10 14:13:05.707670 | orchestrator | Saturday 10 January 2026 14:13:01 +0000 (0:00:01.934) 0:03:45.613 ****** 2026-01-10 14:13:05.707691 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:13:05.707732 | orchestrator | 2026-01-10 14:13:05.707752 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-01-10 14:13:05.707772 | orchestrator | Saturday 10 January 2026 14:13:01 +0000 (0:00:00.437) 0:03:46.050 ****** 2026-01-10 14:13:05.707791 | orchestrator | ok: [testbed-manager] 2026-01-10 14:13:05.707810 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:13:05.707829 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:13:05.707848 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:13:05.707867 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:13:05.707886 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:13:05.707905 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:13:05.707924 | orchestrator | 2026-01-10 14:13:05.707942 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-01-10 14:13:05.707960 | orchestrator | Saturday 10 January 2026 14:13:02 +0000 (0:00:01.294) 0:03:47.345 
****** 2026-01-10 14:13:05.707979 | orchestrator | ok: [testbed-manager] 2026-01-10 14:13:05.707998 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:13:05.708016 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:13:05.708034 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:13:05.708053 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:13:05.708071 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:13:05.708090 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:13:05.708109 | orchestrator | 2026-01-10 14:13:05.708127 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-01-10 14:13:05.708146 | orchestrator | Saturday 10 January 2026 14:13:03 +0000 (0:00:00.648) 0:03:47.994 ****** 2026-01-10 14:13:05.708164 | orchestrator | changed: [testbed-manager] 2026-01-10 14:13:05.708183 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:13:05.708202 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:13:05.708220 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:13:05.708236 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:13:05.708247 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:13:05.708258 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:13:05.708269 | orchestrator | 2026-01-10 14:13:05.708280 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-01-10 14:13:05.708291 | orchestrator | Saturday 10 January 2026 14:13:04 +0000 (0:00:00.645) 0:03:48.640 ****** 2026-01-10 14:13:05.708320 | orchestrator | ok: [testbed-manager] 2026-01-10 14:13:05.708332 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:13:05.708343 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:13:05.708354 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:13:05.708365 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:13:05.708375 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:13:05.708386 | orchestrator | ok: [testbed-node-2] 2026-01-10 
14:13:05.708397 | orchestrator | 2026-01-10 14:13:05.708408 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-01-10 14:13:05.708419 | orchestrator | Saturday 10 January 2026 14:13:04 +0000 (0:00:00.591) 0:03:49.231 ****** 2026-01-10 14:13:05.708434 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052870.1498318, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:13:05.708450 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052885.1208897, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:13:05.708477 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052871.407452, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:13:05.708582 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052875.0149825, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:13:10.867303 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052876.6088543, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:13:10.867437 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052877.906002, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:13:10.867449 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1768052871.5440784, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:13:10.867459 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:13:10.867468 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:13:10.867580 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:13:10.867609 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:13:10.867635 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:13:10.867644 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 
14:13:10.867652 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-10 14:13:10.867661 | orchestrator | 2026-01-10 14:13:10.867672 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-01-10 14:13:10.867683 | orchestrator | Saturday 10 January 2026 14:13:05 +0000 (0:00:01.009) 0:03:50.241 ****** 2026-01-10 14:13:10.867691 | orchestrator | changed: [testbed-manager] 2026-01-10 14:13:10.867701 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:13:10.867709 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:13:10.867717 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:13:10.867725 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:13:10.867733 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:13:10.867740 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:13:10.867748 | orchestrator | 2026-01-10 14:13:10.867757 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-01-10 14:13:10.867773 | orchestrator | Saturday 10 January 2026 14:13:06 +0000 (0:00:01.106) 0:03:51.348 ****** 2026-01-10 14:13:10.867781 | orchestrator | changed: [testbed-manager] 2026-01-10 14:13:10.867789 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:13:10.867797 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:13:10.867805 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:13:10.867813 | orchestrator | changed: [testbed-node-0] 
2026-01-10 14:13:10.867821 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:13:10.867831 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:13:10.867840 | orchestrator | 2026-01-10 14:13:10.867849 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-01-10 14:13:10.867858 | orchestrator | Saturday 10 January 2026 14:13:07 +0000 (0:00:01.186) 0:03:52.534 ****** 2026-01-10 14:13:10.867867 | orchestrator | changed: [testbed-manager] 2026-01-10 14:13:10.867876 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:13:10.867885 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:13:10.867894 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:13:10.867903 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:13:10.867912 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:13:10.867921 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:13:10.867930 | orchestrator | 2026-01-10 14:13:10.867939 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-01-10 14:13:10.867948 | orchestrator | Saturday 10 January 2026 14:13:09 +0000 (0:00:01.283) 0:03:53.817 ****** 2026-01-10 14:13:10.867957 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:13:10.867967 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:13:10.867975 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:13:10.867983 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:13:10.867991 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:13:10.867999 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:13:10.868007 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:13:10.868015 | orchestrator | 2026-01-10 14:13:10.868023 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-01-10 14:13:10.868031 | orchestrator | Saturday 10 January 2026 14:13:09 +0000 (0:00:00.273) 0:03:54.091 ****** 2026-01-10 
14:13:10.868039 | orchestrator | ok: [testbed-manager] 2026-01-10 14:13:10.868052 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:13:10.868061 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:13:10.868068 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:13:10.868082 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:13:10.868095 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:13:10.868106 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:13:10.868117 | orchestrator | 2026-01-10 14:13:10.868129 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-01-10 14:13:10.868141 | orchestrator | Saturday 10 January 2026 14:13:10 +0000 (0:00:00.852) 0:03:54.943 ****** 2026-01-10 14:13:10.868157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:13:10.868172 | orchestrator | 2026-01-10 14:13:10.868185 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-01-10 14:13:10.868206 | orchestrator | Saturday 10 January 2026 14:13:10 +0000 (0:00:00.463) 0:03:55.407 ****** 2026-01-10 14:14:29.114679 | orchestrator | ok: [testbed-manager] 2026-01-10 14:14:29.114803 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:14:29.114819 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:14:29.114830 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:14:29.114841 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:14:29.114851 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:14:29.114861 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:14:29.114871 | orchestrator | 2026-01-10 14:14:29.114883 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-01-10 14:14:29.114922 | orchestrator | 
Saturday 10 January 2026 14:13:19 +0000 (0:00:08.411) 0:04:03.818 ****** 2026-01-10 14:14:29.114968 | orchestrator | ok: [testbed-manager] 2026-01-10 14:14:29.114980 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:14:29.114990 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:14:29.115000 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:14:29.115009 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:14:29.115019 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:14:29.115029 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:14:29.115038 | orchestrator | 2026-01-10 14:14:29.115049 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-01-10 14:14:29.115058 | orchestrator | Saturday 10 January 2026 14:13:20 +0000 (0:00:01.401) 0:04:05.220 ****** 2026-01-10 14:14:29.115068 | orchestrator | ok: [testbed-manager] 2026-01-10 14:14:29.115078 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:14:29.115087 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:14:29.115097 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:14:29.115106 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:14:29.115116 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:14:29.115125 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:14:29.115135 | orchestrator | 2026-01-10 14:14:29.115145 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-01-10 14:14:29.115154 | orchestrator | Saturday 10 January 2026 14:13:21 +0000 (0:00:01.163) 0:04:06.384 ****** 2026-01-10 14:14:29.115164 | orchestrator | ok: [testbed-manager] 2026-01-10 14:14:29.115173 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:14:29.115183 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:14:29.115192 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:14:29.115202 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:14:29.115213 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:14:29.115224 | orchestrator | ok: 
[testbed-node-2] 2026-01-10 14:14:29.115234 | orchestrator | 2026-01-10 14:14:29.115246 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-01-10 14:14:29.115257 | orchestrator | Saturday 10 January 2026 14:13:22 +0000 (0:00:00.300) 0:04:06.684 ****** 2026-01-10 14:14:29.115268 | orchestrator | ok: [testbed-manager] 2026-01-10 14:14:29.115279 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:14:29.115289 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:14:29.115300 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:14:29.115311 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:14:29.115322 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:14:29.115332 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:14:29.115342 | orchestrator | 2026-01-10 14:14:29.115353 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-01-10 14:14:29.115364 | orchestrator | Saturday 10 January 2026 14:13:22 +0000 (0:00:00.328) 0:04:07.013 ****** 2026-01-10 14:14:29.115374 | orchestrator | ok: [testbed-manager] 2026-01-10 14:14:29.115385 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:14:29.115395 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:14:29.115406 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:14:29.115445 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:14:29.115457 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:14:29.115468 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:14:29.115479 | orchestrator | 2026-01-10 14:14:29.115490 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-01-10 14:14:29.115500 | orchestrator | Saturday 10 January 2026 14:13:22 +0000 (0:00:00.301) 0:04:07.314 ****** 2026-01-10 14:14:29.115511 | orchestrator | ok: [testbed-manager] 2026-01-10 14:14:29.115522 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:14:29.115533 | orchestrator | ok: 
[testbed-node-4] 2026-01-10 14:14:29.115544 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:14:29.115555 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:14:29.115566 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:14:29.115575 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:14:29.115585 | orchestrator | 2026-01-10 14:14:29.115594 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-01-10 14:14:29.115613 | orchestrator | Saturday 10 January 2026 14:13:28 +0000 (0:00:05.301) 0:04:12.616 ****** 2026-01-10 14:14:29.115625 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:14:29.115637 | orchestrator | 2026-01-10 14:14:29.115647 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-01-10 14:14:29.115657 | orchestrator | Saturday 10 January 2026 14:13:28 +0000 (0:00:00.371) 0:04:12.988 ****** 2026-01-10 14:14:29.115667 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-01-10 14:14:29.115684 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-01-10 14:14:29.115702 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-01-10 14:14:29.115719 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:14:29.115737 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-01-10 14:14:29.115754 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-01-10 14:14:29.115772 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-01-10 14:14:29.115782 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:14:29.115792 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-01-10 14:14:29.115802 | orchestrator | 
skipping: [testbed-node-4] 2026-01-10 14:14:29.115812 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-01-10 14:14:29.115822 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-01-10 14:14:29.115831 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:14:29.115841 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-01-10 14:14:29.115852 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-01-10 14:14:29.115861 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:14:29.115888 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-01-10 14:14:29.115899 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:14:29.115909 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-01-10 14:14:29.115919 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-01-10 14:14:29.115929 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:14:29.115938 | orchestrator | 2026-01-10 14:14:29.115948 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-01-10 14:14:29.115958 | orchestrator | Saturday 10 January 2026 14:13:28 +0000 (0:00:00.273) 0:04:13.262 ****** 2026-01-10 14:14:29.115969 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:14:29.115979 | orchestrator | 2026-01-10 14:14:29.115989 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-01-10 14:14:29.115999 | orchestrator | Saturday 10 January 2026 14:13:29 +0000 (0:00:00.370) 0:04:13.632 ****** 2026-01-10 14:14:29.116009 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-01-10 14:14:29.116019 | orchestrator | skipping: 
[testbed-manager]
2026-01-10 14:14:29.116028 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-01-10 14:14:29.116038 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:29.116048 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-01-10 14:14:29.116057 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-01-10 14:14:29.116067 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:29.116077 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:14:29.116086 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-01-10 14:14:29.116096 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-01-10 14:14:29.116106 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:14:29.116122 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:14:29.116132 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-01-10 14:14:29.116142 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:14:29.116151 | orchestrator |
2026-01-10 14:14:29.116161 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-01-10 14:14:29.116171 | orchestrator | Saturday 10 January 2026 14:13:29 +0000 (0:00:00.295) 0:04:13.927 ******
2026-01-10 14:14:29.116199 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:14:29.116210 | orchestrator |
2026-01-10 14:14:29.116220 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-01-10 14:14:29.116230 | orchestrator | Saturday 10 January 2026 14:13:29 +0000 (0:00:00.355) 0:04:14.283 ******
2026-01-10 14:14:29.116239 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:29.116249 | orchestrator | changed: [testbed-manager]
2026-01-10 14:14:29.116259 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:29.116268 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:29.116278 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:29.116287 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:29.116297 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:29.116307 | orchestrator |
2026-01-10 14:14:29.116316 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-01-10 14:14:29.116326 | orchestrator | Saturday 10 January 2026 14:14:04 +0000 (0:00:35.220) 0:04:49.504 ******
2026-01-10 14:14:29.116335 | orchestrator | changed: [testbed-manager]
2026-01-10 14:14:29.116345 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:29.116354 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:29.116364 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:29.116374 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:29.116383 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:29.116393 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:29.116402 | orchestrator |
2026-01-10 14:14:29.116469 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-01-10 14:14:29.116483 | orchestrator | Saturday 10 January 2026 14:14:13 +0000 (0:00:08.114) 0:04:57.618 ******
2026-01-10 14:14:29.116493 | orchestrator | changed: [testbed-manager]
2026-01-10 14:14:29.116502 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:29.116512 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:29.116522 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:29.116531 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:29.116541 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:29.116550 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:29.116560 | orchestrator |
2026-01-10 14:14:29.116569 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-01-10 14:14:29.116584 | orchestrator | Saturday 10 January 2026 14:14:21 +0000 (0:00:08.225) 0:05:05.843 ******
2026-01-10 14:14:29.116594 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:29.116604 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:29.116613 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:29.116623 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:29.116632 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:29.116642 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:29.116651 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:29.116661 | orchestrator |
2026-01-10 14:14:29.116671 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-01-10 14:14:29.116680 | orchestrator | Saturday 10 January 2026 14:14:23 +0000 (0:00:01.905) 0:05:07.749 ******
2026-01-10 14:14:29.116690 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:29.116700 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:29.116717 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:29.116734 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:29.116762 | orchestrator | changed: [testbed-manager]
2026-01-10 14:14:29.116780 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:29.116797 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:29.116814 | orchestrator |
2026-01-10 14:14:29.116833 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-01-10 14:14:40.434079 | orchestrator | Saturday 10 January 2026 14:14:29 +0000 (0:00:05.896) 0:05:13.646 ******
2026-01-10 14:14:40.434189 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:14:40.434208 | orchestrator |
2026-01-10 14:14:40.434223 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-01-10 14:14:40.434240 | orchestrator | Saturday 10 January 2026 14:14:29 +0000 (0:00:00.449) 0:05:14.095 ******
2026-01-10 14:14:40.434257 | orchestrator | changed: [testbed-manager]
2026-01-10 14:14:40.434277 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:40.434293 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:40.434309 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:40.434326 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:40.434342 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:40.434357 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:40.434367 | orchestrator |
2026-01-10 14:14:40.434377 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-01-10 14:14:40.434388 | orchestrator | Saturday 10 January 2026 14:14:30 +0000 (0:00:00.740) 0:05:14.836 ******
2026-01-10 14:14:40.434398 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:40.434440 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:40.434450 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:40.434460 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:40.434470 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:40.434480 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:40.434490 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:40.434500 | orchestrator |
2026-01-10 14:14:40.434510 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-01-10 14:14:40.434520 | orchestrator | Saturday 10 January 2026 14:14:31 +0000 (0:00:01.688) 0:05:16.524 ******
2026-01-10 14:14:40.434530 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:14:40.434540 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:14:40.434549 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:14:40.434559 | orchestrator | changed: [testbed-manager]
2026-01-10 14:14:40.434570 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:14:40.434581 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:14:40.434592 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:14:40.434603 | orchestrator |
2026-01-10 14:14:40.434614 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-01-10 14:14:40.434625 | orchestrator | Saturday 10 January 2026 14:14:32 +0000 (0:00:00.824) 0:05:17.349 ******
2026-01-10 14:14:40.434636 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:40.434646 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:40.434657 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:40.434668 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:14:40.434678 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:14:40.434689 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:14:40.434700 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:14:40.434711 | orchestrator |
2026-01-10 14:14:40.434723 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-01-10 14:14:40.434735 | orchestrator | Saturday 10 January 2026 14:14:33 +0000 (0:00:00.285) 0:05:17.634 ******
2026-01-10 14:14:40.434746 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:40.434757 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:40.434767 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:40.434777 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:14:40.434786 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:14:40.434821 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:14:40.434832 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:14:40.434843 | orchestrator |
2026-01-10 14:14:40.434854 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-01-10 14:14:40.434872 | orchestrator | Saturday 10 January 2026 14:14:33 +0000 (0:00:00.400) 0:05:18.034 ******
2026-01-10 14:14:40.434892 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:40.434909 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:40.434929 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:40.434949 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:40.434967 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:40.434986 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:40.435005 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:40.435026 | orchestrator |
2026-01-10 14:14:40.435046 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-01-10 14:14:40.435058 | orchestrator | Saturday 10 January 2026 14:14:33 +0000 (0:00:00.313) 0:05:18.348 ******
2026-01-10 14:14:40.435069 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:40.435080 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:40.435091 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:40.435102 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:14:40.435112 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:14:40.435123 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:14:40.435133 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:14:40.435144 | orchestrator |
2026-01-10 14:14:40.435170 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-01-10 14:14:40.435183 | orchestrator | Saturday 10 January 2026 14:14:34 +0000 (0:00:00.289) 0:05:18.637 ******
2026-01-10 14:14:40.435194 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:40.435205 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:40.435215 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:40.435226 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:40.435237 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:40.435247 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:40.435258 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:40.435268 | orchestrator |
2026-01-10 14:14:40.435279 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-01-10 14:14:40.435290 | orchestrator | Saturday 10 January 2026 14:14:34 +0000 (0:00:00.326) 0:05:18.964 ******
2026-01-10 14:14:40.435301 | orchestrator | ok: [testbed-manager] =>
2026-01-10 14:14:40.435312 | orchestrator |   docker_version: 5:27.5.1
2026-01-10 14:14:40.435323 | orchestrator | ok: [testbed-node-3] =>
2026-01-10 14:14:40.435333 | orchestrator |   docker_version: 5:27.5.1
2026-01-10 14:14:40.435344 | orchestrator | ok: [testbed-node-4] =>
2026-01-10 14:14:40.435355 | orchestrator |   docker_version: 5:27.5.1
2026-01-10 14:14:40.435366 | orchestrator | ok: [testbed-node-5] =>
2026-01-10 14:14:40.435377 | orchestrator |   docker_version: 5:27.5.1
2026-01-10 14:14:40.435438 | orchestrator | ok: [testbed-node-0] =>
2026-01-10 14:14:40.435450 | orchestrator |   docker_version: 5:27.5.1
2026-01-10 14:14:40.435461 | orchestrator | ok: [testbed-node-1] =>
2026-01-10 14:14:40.435472 | orchestrator |   docker_version: 5:27.5.1
2026-01-10 14:14:40.435483 | orchestrator | ok: [testbed-node-2] =>
2026-01-10 14:14:40.435494 | orchestrator |   docker_version: 5:27.5.1
2026-01-10 14:14:40.435505 | orchestrator |
2026-01-10 14:14:40.435516 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-01-10 14:14:40.435527 | orchestrator | Saturday 10 January 2026 14:14:34 +0000 (0:00:00.285) 0:05:19.249 ******
2026-01-10 14:14:40.435538 | orchestrator | ok: [testbed-manager] =>
2026-01-10 14:14:40.435549 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-10 14:14:40.435559 | orchestrator | ok: [testbed-node-3] =>
2026-01-10 14:14:40.435570 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-10 14:14:40.435581 | orchestrator | ok: [testbed-node-4] =>
2026-01-10 14:14:40.435592 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-10 14:14:40.435603 | orchestrator | ok: [testbed-node-5] =>
2026-01-10 14:14:40.435622 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-10 14:14:40.435633 | orchestrator | ok: [testbed-node-0] =>
2026-01-10 14:14:40.435644 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-10 14:14:40.435654 | orchestrator | ok: [testbed-node-1] =>
2026-01-10 14:14:40.435665 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-10 14:14:40.435676 | orchestrator | ok: [testbed-node-2] =>
2026-01-10 14:14:40.435687 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-10 14:14:40.435697 | orchestrator |
2026-01-10 14:14:40.435709 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-01-10 14:14:40.435720 | orchestrator | Saturday 10 January 2026 14:14:35 +0000 (0:00:00.301) 0:05:19.551 ******
2026-01-10 14:14:40.435731 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:40.435742 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:40.435753 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:40.435763 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:14:40.435774 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:14:40.435785 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:14:40.435795 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:14:40.435806 | orchestrator |
2026-01-10 14:14:40.435817 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-01-10 14:14:40.435828 | orchestrator | Saturday 10 January 2026 14:14:35 +0000 (0:00:00.291) 0:05:19.842 ******
2026-01-10 14:14:40.435839 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:40.435850 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:40.435860 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:40.435871 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:14:40.435882 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:14:40.435893 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:14:40.435903 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:14:40.435914 | orchestrator |
2026-01-10 14:14:40.435925 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-01-10 14:14:40.435936 | orchestrator | Saturday 10 January 2026 14:14:35 +0000 (0:00:00.269) 0:05:20.111 ******
2026-01-10 14:14:40.435949 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:14:40.435962 | orchestrator |
2026-01-10 14:14:40.435974 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-01-10 14:14:40.435985 | orchestrator | Saturday 10 January 2026 14:14:35 +0000 (0:00:00.423) 0:05:20.535 ******
2026-01-10 14:14:40.435996 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:40.436007 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:40.436017 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:40.436028 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:40.436040 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:40.436054 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:40.436074 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:40.436093 | orchestrator |
2026-01-10 14:14:40.436114 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-01-10 14:14:40.436134 | orchestrator | Saturday 10 January 2026 14:14:37 +0000 (0:00:01.033) 0:05:21.568 ******
2026-01-10 14:14:40.436152 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:14:40.436172 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:14:40.436193 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:14:40.436213 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:14:40.436226 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:14:40.436237 | orchestrator | ok: [testbed-manager]
2026-01-10 14:14:40.436248 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:14:40.436258 | orchestrator |
2026-01-10 14:14:40.436270 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-01-10 14:14:40.436281 | orchestrator | Saturday 10 January 2026 14:14:40 +0000 (0:00:03.016) 0:05:24.585 ******
2026-01-10 14:14:40.436302 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-01-10 14:14:40.436313 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-01-10 14:14:40.436331 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-01-10 14:14:40.436342 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-01-10 14:14:40.436353 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-01-10 14:14:40.436364 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-01-10 14:14:40.436375 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:14:40.436385 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-01-10 14:14:40.436396 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-01-10 14:14:40.436461 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-01-10 14:14:40.436474 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:14:40.436485 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-01-10 14:14:40.436496 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-01-10 14:14:40.436507 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-01-10 14:14:40.436517 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:14:40.436528 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-01-10 14:14:40.436548 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-01-10 14:15:43.856829 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-01-10 14:15:43.856948 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:15:43.856965 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-01-10 14:15:43.856977 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-01-10 14:15:43.856988 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-01-10 14:15:43.856999 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:15:43.857010 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:15:43.857022 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-01-10 14:15:43.857033 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-01-10 14:15:43.857044 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-01-10 14:15:43.857055 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:15:43.857066 | orchestrator |
2026-01-10 14:15:43.857079 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-01-10 14:15:43.857092 | orchestrator | Saturday 10 January 2026 14:14:40 +0000 (0:00:00.617) 0:05:25.202 ******
2026-01-10 14:15:43.857103 | orchestrator | ok: [testbed-manager]
2026-01-10 14:15:43.857132 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:15:43.857144 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:15:43.857167 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:15:43.857178 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:15:43.857189 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:15:43.857200 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:15:43.857210 | orchestrator |
2026-01-10 14:15:43.857222 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-01-10 14:15:43.857233 | orchestrator | Saturday 10 January 2026 14:14:47 +0000 (0:00:07.098) 0:05:32.301 ******
2026-01-10 14:15:43.857245 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:15:43.857255 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:15:43.857266 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:15:43.857277 | orchestrator | ok: [testbed-manager]
2026-01-10 14:15:43.857288 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:15:43.857299 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:15:43.857310 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:15:43.857320 | orchestrator |
2026-01-10 14:15:43.857331 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-01-10 14:15:43.857342 | orchestrator | Saturday 10 January 2026 14:14:48 +0000 (0:00:01.078) 0:05:33.379 ******
2026-01-10 14:15:43.857354 | orchestrator | ok: [testbed-manager]
2026-01-10 14:15:43.857445 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:15:43.857465 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:15:43.857477 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:15:43.857490 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:15:43.857503 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:15:43.857515 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:15:43.857528 | orchestrator |
2026-01-10 14:15:43.857540 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-01-10 14:15:43.857552 | orchestrator | Saturday 10 January 2026 14:14:57 +0000 (0:00:08.821) 0:05:42.201 ******
2026-01-10 14:15:43.857565 | orchestrator | changed: [testbed-manager]
2026-01-10 14:15:43.857577 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:15:43.857589 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:15:43.857601 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:15:43.857614 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:15:43.857626 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:15:43.857639 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:15:43.857651 | orchestrator |
2026-01-10 14:15:43.857663 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-01-10 14:15:43.857675 | orchestrator | Saturday 10 January 2026 14:15:01 +0000 (0:00:03.652) 0:05:45.853 ******
2026-01-10 14:15:43.857687 | orchestrator | ok: [testbed-manager]
2026-01-10 14:15:43.857699 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:15:43.857711 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:15:43.857723 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:15:43.857734 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:15:43.857745 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:15:43.857755 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:15:43.857766 | orchestrator |
2026-01-10 14:15:43.857777 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-01-10 14:15:43.857788 | orchestrator | Saturday 10 January 2026 14:15:02 +0000 (0:00:01.349) 0:05:47.203 ******
2026-01-10 14:15:43.857798 | orchestrator | ok: [testbed-manager]
2026-01-10 14:15:43.857809 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:15:43.857820 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:15:43.857830 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:15:43.857841 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:15:43.857852 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:15:43.857862 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:15:43.857873 | orchestrator |
2026-01-10 14:15:43.857884 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-01-10 14:15:43.857895 | orchestrator | Saturday 10 January 2026 14:15:04 +0000 (0:00:01.595) 0:05:48.799 ******
2026-01-10 14:15:43.857921 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:15:43.857933 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:15:43.857943 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:15:43.857955 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:15:43.857966 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:15:43.857976 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:15:43.857987 | orchestrator | changed: [testbed-manager]
2026-01-10 14:15:43.857998 | orchestrator |
2026-01-10 14:15:43.858009 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-01-10 14:15:43.858092 | orchestrator | Saturday 10 January 2026 14:15:04 +0000 (0:00:00.582) 0:05:49.381 ******
2026-01-10 14:15:43.858107 | orchestrator | ok: [testbed-manager]
2026-01-10 14:15:43.858118 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:15:43.858129 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:15:43.858139 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:15:43.858150 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:15:43.858161 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:15:43.858172 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:15:43.858183 | orchestrator |
2026-01-10 14:15:43.858194 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-01-10 14:15:43.858235 | orchestrator | Saturday 10 January 2026 14:15:15 +0000 (0:00:10.307) 0:05:59.689 ******
2026-01-10 14:15:43.858248 | orchestrator | changed: [testbed-manager]
2026-01-10 14:15:43.858259 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:15:43.858270 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:15:43.858280 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:15:43.858291 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:15:43.858302 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:15:43.858312 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:15:43.858323 | orchestrator |
2026-01-10 14:15:43.858335 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-01-10 14:15:43.858346 | orchestrator | Saturday 10 January 2026 14:15:16 +0000 (0:00:00.973) 0:06:00.663 ******
2026-01-10 14:15:43.858357 | orchestrator | ok: [testbed-manager]
2026-01-10 14:15:43.858389 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:15:43.858400 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:15:43.858411 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:15:43.858422 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:15:43.858433 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:15:43.858443 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:15:43.858454 | orchestrator |
2026-01-10 14:15:43.858465 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-01-10 14:15:43.858476 | orchestrator | Saturday 10 January 2026 14:15:25 +0000 (0:00:09.689) 0:06:10.352 ******
2026-01-10 14:15:43.858487 | orchestrator | ok: [testbed-manager]
2026-01-10 14:15:43.858498 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:15:43.858509 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:15:43.858519 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:15:43.858530 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:15:43.858541 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:15:43.858552 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:15:43.858562 | orchestrator |
2026-01-10 14:15:43.858574 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-01-10 14:15:43.858584 | orchestrator | Saturday 10 January 2026 14:15:37 +0000 (0:00:11.344) 0:06:21.697 ******
2026-01-10 14:15:43.858596 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-01-10 14:15:43.858607 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-01-10 14:15:43.858617 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-01-10 14:15:43.858628 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-01-10 14:15:43.858639 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-01-10 14:15:43.858650 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-01-10 14:15:43.858661 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-01-10 14:15:43.858672 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-01-10 14:15:43.858698 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-01-10 14:15:43.858710 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-01-10 14:15:43.858731 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-01-10 14:15:43.858743 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-01-10 14:15:43.858754 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-01-10 14:15:43.858765 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-01-10 14:15:43.858775 | orchestrator |
2026-01-10 14:15:43.858787 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-01-10 14:15:43.858797 | orchestrator | Saturday 10 January 2026 14:15:38 +0000 (0:00:01.263) 0:06:22.961 ******
2026-01-10 14:15:43.858808 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:15:43.858819 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:15:43.858830 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:15:43.858840 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:15:43.858851 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:15:43.858862 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:15:43.858880 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:15:43.858891 | orchestrator |
2026-01-10 14:15:43.858902 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-01-10 14:15:43.858913 | orchestrator | Saturday 10 January 2026 14:15:38 +0000 (0:00:00.537) 0:06:23.498 ******
2026-01-10 14:15:43.858924 | orchestrator | ok: [testbed-manager]
2026-01-10 14:15:43.858935 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:15:43.858946 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:15:43.858957 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:15:43.858967 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:15:43.858978 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:15:43.858989 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:15:43.859000 | orchestrator |
2026-01-10 14:15:43.859011 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-01-10 14:15:43.859022 | orchestrator | Saturday 10 January 2026 14:15:42 +0000 (0:00:03.857) 0:06:27.356 ******
2026-01-10 14:15:43.859034 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:15:43.859044 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:15:43.859055 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:15:43.859066 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:15:43.859077 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:15:43.859088 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:15:43.859098 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:15:43.859109 | orchestrator |
2026-01-10 14:15:43.859121 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-01-10 14:15:43.859132 | orchestrator | Saturday 10 January 2026 14:15:43 +0000 (0:00:00.528) 0:06:27.885 ******
2026-01-10 14:15:43.859143 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-01-10 14:15:43.859154 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-01-10 14:15:43.859165 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:15:43.859175 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-01-10 14:15:43.859186 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-01-10 14:15:43.859197 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:15:43.859208 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-01-10 14:15:43.859219 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-01-10 14:15:43.859230 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:15:43.859249 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-01-10 14:16:03.642302 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-01-10 14:16:03.642429 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:16:03.642444 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-01-10 14:16:03.642501 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-01-10 14:16:03.642513 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:16:03.642522 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-01-10 14:16:03.642532 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-01-10 14:16:03.642541 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:16:03.642550 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-01-10 14:16:03.642559 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-01-10 14:16:03.642568 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:16:03.642577 | orchestrator |
2026-01-10 14:16:03.642588 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-01-10 14:16:03.642598 | orchestrator | Saturday 10 January 2026 14:15:44 +0000 (0:00:00.793) 0:06:28.679 ******
2026-01-10 14:16:03.642607 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:16:03.642616 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:16:03.642625 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:16:03.642634 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:16:03.642666 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:16:03.642675 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:16:03.642684 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:16:03.642693 | orchestrator |
2026-01-10 14:16:03.642702 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-01-10 14:16:03.642712 | orchestrator | Saturday 10 January 2026 14:15:44 +0000 (0:00:00.563) 0:06:29.242 ******
2026-01-10 14:16:03.642721 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:16:03.642730 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:16:03.642738 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:16:03.642747 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:16:03.642756 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:16:03.642765 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:16:03.642773 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:16:03.642782 | orchestrator |
2026-01-10 14:16:03.642792 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-01-10 14:16:03.642801 | orchestrator | Saturday 10 January 2026 14:15:45 +0000 (0:00:00.525) 0:06:29.767 ******
2026-01-10 14:16:03.642809 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:16:03.642818 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:16:03.642827 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:16:03.642836 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:16:03.642846 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:16:03.642856 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:16:03.642866 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:16:03.642876 | orchestrator |
2026-01-10 14:16:03.642886 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-01-10 14:16:03.642896 | orchestrator | Saturday 10 January 2026 14:15:45 +0000 (0:00:00.563) 0:06:30.331 ******
2026-01-10 14:16:03.642906 | orchestrator | ok: [testbed-manager]
2026-01-10 14:16:03.642916 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:16:03.642926 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:16:03.642937 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:16:03.642947 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:16:03.642957 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:16:03.642967 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:16:03.642977 | orchestrator |
2026-01-10 14:16:03.642987 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-01-10 14:16:03.642997 | orchestrator | Saturday 10 January 2026 14:15:47 +0000 (0:00:01.980) 0:06:32.312 ******
2026-01-10 14:16:03.643008 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:16:03.643021 | orchestrator |
2026-01-10 14:16:03.643031 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-01-10 14:16:03.643042 | orchestrator | Saturday 10 January 2026 14:15:48 +0000 (0:00:00.890) 0:06:33.202 ******
2026-01-10 14:16:03.643051 | orchestrator | ok: [testbed-manager]
2026-01-10 14:16:03.643077 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:16:03.643087 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:16:03.643097 | orchestrator |
changed: [testbed-node-0] 2026-01-10 14:16:03.643107 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:16:03.643117 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:16:03.643127 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:16:03.643137 | orchestrator | 2026-01-10 14:16:03.643146 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-01-10 14:16:03.643162 | orchestrator | Saturday 10 January 2026 14:15:49 +0000 (0:00:00.849) 0:06:34.051 ****** 2026-01-10 14:16:03.643172 | orchestrator | ok: [testbed-manager] 2026-01-10 14:16:03.643182 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:16:03.643192 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:16:03.643201 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:16:03.643211 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:16:03.643226 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:16:03.643242 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:16:03.643261 | orchestrator | 2026-01-10 14:16:03.643285 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-01-10 14:16:03.643299 | orchestrator | Saturday 10 January 2026 14:15:50 +0000 (0:00:00.871) 0:06:34.923 ****** 2026-01-10 14:16:03.643315 | orchestrator | ok: [testbed-manager] 2026-01-10 14:16:03.643330 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:16:03.643363 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:16:03.643379 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:16:03.643391 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:16:03.643403 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:16:03.643416 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:16:03.643429 | orchestrator | 2026-01-10 14:16:03.643445 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-01-10 14:16:03.643482 | 
orchestrator | Saturday 10 January 2026 14:15:51 +0000 (0:00:01.545) 0:06:36.469 ****** 2026-01-10 14:16:03.643498 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:16:03.643518 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:16:03.643538 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:16:03.643552 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:16:03.643566 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:16:03.643580 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:16:03.643593 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:16:03.643608 | orchestrator | 2026-01-10 14:16:03.643623 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-01-10 14:16:03.643639 | orchestrator | Saturday 10 January 2026 14:15:53 +0000 (0:00:01.376) 0:06:37.846 ****** 2026-01-10 14:16:03.643654 | orchestrator | ok: [testbed-manager] 2026-01-10 14:16:03.643669 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:16:03.643684 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:16:03.643699 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:16:03.643715 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:16:03.643729 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:16:03.643744 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:16:03.643760 | orchestrator | 2026-01-10 14:16:03.643774 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-01-10 14:16:03.643788 | orchestrator | Saturday 10 January 2026 14:15:54 +0000 (0:00:01.567) 0:06:39.413 ****** 2026-01-10 14:16:03.643803 | orchestrator | changed: [testbed-manager] 2026-01-10 14:16:03.643818 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:16:03.643833 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:16:03.643849 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:16:03.643864 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:16:03.643879 | 
orchestrator | changed: [testbed-node-1] 2026-01-10 14:16:03.643895 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:16:03.643909 | orchestrator | 2026-01-10 14:16:03.643923 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-01-10 14:16:03.643939 | orchestrator | Saturday 10 January 2026 14:15:56 +0000 (0:00:01.415) 0:06:40.828 ****** 2026-01-10 14:16:03.643949 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:16:03.643959 | orchestrator | 2026-01-10 14:16:03.643968 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-01-10 14:16:03.643977 | orchestrator | Saturday 10 January 2026 14:15:57 +0000 (0:00:01.130) 0:06:41.959 ****** 2026-01-10 14:16:03.643986 | orchestrator | ok: [testbed-manager] 2026-01-10 14:16:03.643995 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:16:03.644003 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:16:03.644012 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:16:03.644021 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:16:03.644030 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:16:03.644051 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:16:03.644061 | orchestrator | 2026-01-10 14:16:03.644069 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-01-10 14:16:03.644078 | orchestrator | Saturday 10 January 2026 14:15:58 +0000 (0:00:01.354) 0:06:43.313 ****** 2026-01-10 14:16:03.644087 | orchestrator | ok: [testbed-manager] 2026-01-10 14:16:03.644096 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:16:03.644104 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:16:03.644113 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:16:03.644121 | orchestrator | 
ok: [testbed-node-0] 2026-01-10 14:16:03.644130 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:16:03.644138 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:16:03.644147 | orchestrator | 2026-01-10 14:16:03.644156 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-01-10 14:16:03.644164 | orchestrator | Saturday 10 January 2026 14:15:59 +0000 (0:00:01.126) 0:06:44.440 ****** 2026-01-10 14:16:03.644173 | orchestrator | ok: [testbed-manager] 2026-01-10 14:16:03.644182 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:16:03.644190 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:16:03.644199 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:16:03.644208 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:16:03.644216 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:16:03.644225 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:16:03.644234 | orchestrator | 2026-01-10 14:16:03.644242 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-01-10 14:16:03.644251 | orchestrator | Saturday 10 January 2026 14:16:01 +0000 (0:00:01.153) 0:06:45.594 ****** 2026-01-10 14:16:03.644260 | orchestrator | ok: [testbed-manager] 2026-01-10 14:16:03.644268 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:16:03.644277 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:16:03.644286 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:16:03.644294 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:16:03.644303 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:16:03.644311 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:16:03.644320 | orchestrator | 2026-01-10 14:16:03.644329 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-01-10 14:16:03.644337 | orchestrator | Saturday 10 January 2026 14:16:02 +0000 (0:00:01.386) 0:06:46.981 ****** 2026-01-10 14:16:03.644370 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:16:03.644380 | orchestrator | 2026-01-10 14:16:03.644389 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-10 14:16:03.644398 | orchestrator | Saturday 10 January 2026 14:16:03 +0000 (0:00:00.891) 0:06:47.873 ****** 2026-01-10 14:16:03.644406 | orchestrator | 2026-01-10 14:16:03.644415 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-10 14:16:03.644424 | orchestrator | Saturday 10 January 2026 14:16:03 +0000 (0:00:00.041) 0:06:47.914 ****** 2026-01-10 14:16:03.644436 | orchestrator | 2026-01-10 14:16:03.644451 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-10 14:16:03.644465 | orchestrator | Saturday 10 January 2026 14:16:03 +0000 (0:00:00.048) 0:06:47.962 ****** 2026-01-10 14:16:03.644479 | orchestrator | 2026-01-10 14:16:03.644495 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-10 14:16:03.644521 | orchestrator | Saturday 10 January 2026 14:16:03 +0000 (0:00:00.040) 0:06:48.002 ****** 2026-01-10 14:16:29.718932 | orchestrator | 2026-01-10 14:16:29.719065 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-10 14:16:29.719089 | orchestrator | Saturday 10 January 2026 14:16:03 +0000 (0:00:00.039) 0:06:48.042 ****** 2026-01-10 14:16:29.719108 | orchestrator | 2026-01-10 14:16:29.719127 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-10 14:16:29.719145 | orchestrator | Saturday 10 January 2026 14:16:03 +0000 (0:00:00.047) 0:06:48.090 ****** 2026-01-10 14:16:29.719195 | orchestrator | 2026-01-10 
14:16:29.719214 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-10 14:16:29.719231 | orchestrator | Saturday 10 January 2026 14:16:03 +0000 (0:00:00.042) 0:06:48.132 ****** 2026-01-10 14:16:29.719249 | orchestrator | 2026-01-10 14:16:29.719267 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-10 14:16:29.719283 | orchestrator | Saturday 10 January 2026 14:16:03 +0000 (0:00:00.040) 0:06:48.173 ****** 2026-01-10 14:16:29.719301 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:16:29.719355 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:16:29.719376 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:16:29.719394 | orchestrator | 2026-01-10 14:16:29.719412 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-01-10 14:16:29.719431 | orchestrator | Saturday 10 January 2026 14:16:04 +0000 (0:00:01.254) 0:06:49.428 ****** 2026-01-10 14:16:29.719450 | orchestrator | changed: [testbed-manager] 2026-01-10 14:16:29.719471 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:16:29.719490 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:16:29.719509 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:16:29.719527 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:16:29.719546 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:16:29.719565 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:16:29.719587 | orchestrator | 2026-01-10 14:16:29.719609 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-01-10 14:16:29.719631 | orchestrator | Saturday 10 January 2026 14:16:06 +0000 (0:00:01.498) 0:06:50.926 ****** 2026-01-10 14:16:29.719652 | orchestrator | changed: [testbed-manager] 2026-01-10 14:16:29.719674 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:16:29.719696 | orchestrator | changed: [testbed-node-4] 2026-01-10 
14:16:29.719718 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:16:29.719740 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:16:29.719761 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:16:29.719783 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:16:29.719804 | orchestrator | 2026-01-10 14:16:29.719825 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-01-10 14:16:29.719843 | orchestrator | Saturday 10 January 2026 14:16:07 +0000 (0:00:01.203) 0:06:52.130 ****** 2026-01-10 14:16:29.719862 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:16:29.719879 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:16:29.719897 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:16:29.719913 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:16:29.719930 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:16:29.719948 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:16:29.719964 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:16:29.719982 | orchestrator | 2026-01-10 14:16:29.720000 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-01-10 14:16:29.720018 | orchestrator | Saturday 10 January 2026 14:16:09 +0000 (0:00:02.341) 0:06:54.471 ****** 2026-01-10 14:16:29.720037 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:16:29.720056 | orchestrator | 2026-01-10 14:16:29.720073 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-01-10 14:16:29.720090 | orchestrator | Saturday 10 January 2026 14:16:10 +0000 (0:00:00.110) 0:06:54.582 ****** 2026-01-10 14:16:29.720109 | orchestrator | ok: [testbed-manager] 2026-01-10 14:16:29.720128 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:16:29.720148 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:16:29.720166 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:16:29.720183 | 
orchestrator | changed: [testbed-node-0] 2026-01-10 14:16:29.720200 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:16:29.720217 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:16:29.720236 | orchestrator | 2026-01-10 14:16:29.720255 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-01-10 14:16:29.720275 | orchestrator | Saturday 10 January 2026 14:16:11 +0000 (0:00:01.030) 0:06:55.613 ****** 2026-01-10 14:16:29.720317 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:16:29.720370 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:16:29.720381 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:16:29.720392 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:16:29.720403 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:16:29.720414 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:16:29.720425 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:16:29.720436 | orchestrator | 2026-01-10 14:16:29.720447 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-01-10 14:16:29.720458 | orchestrator | Saturday 10 January 2026 14:16:11 +0000 (0:00:00.547) 0:06:56.161 ****** 2026-01-10 14:16:29.720486 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:16:29.720500 | orchestrator | 2026-01-10 14:16:29.720511 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-01-10 14:16:29.720522 | orchestrator | Saturday 10 January 2026 14:16:12 +0000 (0:00:01.099) 0:06:57.260 ****** 2026-01-10 14:16:29.720533 | orchestrator | ok: [testbed-manager] 2026-01-10 14:16:29.720544 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:16:29.720556 | orchestrator | ok: 
[testbed-node-4] 2026-01-10 14:16:29.720567 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:16:29.720577 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:16:29.720588 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:16:29.720599 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:16:29.720610 | orchestrator | 2026-01-10 14:16:29.720620 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-01-10 14:16:29.720632 | orchestrator | Saturday 10 January 2026 14:16:13 +0000 (0:00:00.868) 0:06:58.128 ****** 2026-01-10 14:16:29.720643 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-01-10 14:16:29.720677 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-01-10 14:16:29.720689 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-01-10 14:16:29.720701 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-01-10 14:16:29.720712 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-01-10 14:16:29.720722 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-01-10 14:16:29.720733 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-01-10 14:16:29.720744 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-01-10 14:16:29.720755 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-01-10 14:16:29.720766 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-01-10 14:16:29.720777 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-01-10 14:16:29.720788 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-01-10 14:16:29.720798 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-01-10 14:16:29.720809 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-01-10 14:16:29.720820 | orchestrator | 2026-01-10 14:16:29.720831 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-01-10 14:16:29.720843 | orchestrator | Saturday 10 January 2026 14:16:16 +0000 (0:00:02.584) 0:07:00.713 ****** 2026-01-10 14:16:29.720854 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:16:29.720865 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:16:29.720876 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:16:29.720886 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:16:29.720897 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:16:29.720908 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:16:29.720919 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:16:29.720930 | orchestrator | 2026-01-10 14:16:29.720941 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-01-10 14:16:29.720960 | orchestrator | Saturday 10 January 2026 14:16:16 +0000 (0:00:00.717) 0:07:01.431 ****** 2026-01-10 14:16:29.720973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:16:29.720986 | orchestrator | 2026-01-10 14:16:29.720998 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-01-10 14:16:29.721009 | orchestrator | Saturday 10 January 2026 14:16:17 +0000 (0:00:00.846) 0:07:02.278 ****** 2026-01-10 14:16:29.721020 | orchestrator | ok: [testbed-manager] 2026-01-10 14:16:29.721031 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:16:29.721041 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:16:29.721052 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:16:29.721063 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:16:29.721074 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:16:29.721084 | orchestrator | ok: 
[testbed-node-2] 2026-01-10 14:16:29.721095 | orchestrator | 2026-01-10 14:16:29.721106 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-01-10 14:16:29.721117 | orchestrator | Saturday 10 January 2026 14:16:18 +0000 (0:00:00.860) 0:07:03.138 ****** 2026-01-10 14:16:29.721128 | orchestrator | ok: [testbed-manager] 2026-01-10 14:16:29.721139 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:16:29.721149 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:16:29.721160 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:16:29.721254 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:16:29.721266 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:16:29.721277 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:16:29.721288 | orchestrator | 2026-01-10 14:16:29.721299 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-01-10 14:16:29.721310 | orchestrator | Saturday 10 January 2026 14:16:19 +0000 (0:00:01.030) 0:07:04.169 ****** 2026-01-10 14:16:29.721377 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:16:29.721391 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:16:29.721402 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:16:29.721413 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:16:29.721424 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:16:29.721435 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:16:29.721446 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:16:29.721457 | orchestrator | 2026-01-10 14:16:29.721468 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-01-10 14:16:29.721479 | orchestrator | Saturday 10 January 2026 14:16:20 +0000 (0:00:00.527) 0:07:04.696 ****** 2026-01-10 14:16:29.721490 | orchestrator | ok: [testbed-manager] 2026-01-10 14:16:29.721501 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:16:29.721511 | 
orchestrator | ok: [testbed-node-4] 2026-01-10 14:16:29.721522 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:16:29.721533 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:16:29.721544 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:16:29.721562 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:16:29.721573 | orchestrator | 2026-01-10 14:16:29.721584 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-01-10 14:16:29.721595 | orchestrator | Saturday 10 January 2026 14:16:21 +0000 (0:00:01.512) 0:07:06.208 ****** 2026-01-10 14:16:29.721606 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:16:29.721617 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:16:29.721628 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:16:29.721639 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:16:29.721649 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:16:29.721660 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:16:29.721671 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:16:29.721682 | orchestrator | 2026-01-10 14:16:29.721693 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-01-10 14:16:29.721704 | orchestrator | Saturday 10 January 2026 14:16:22 +0000 (0:00:00.538) 0:07:06.746 ****** 2026-01-10 14:16:29.721722 | orchestrator | ok: [testbed-manager] 2026-01-10 14:16:29.721733 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:16:29.721744 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:16:29.721755 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:16:29.721766 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:16:29.721777 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:16:29.721798 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:17:01.725915 | orchestrator | 2026-01-10 14:17:01.726066 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-01-10 14:17:01.726078 | orchestrator | Saturday 10 January 2026 14:16:29 +0000 (0:00:07.504) 0:07:14.251 ****** 2026-01-10 14:17:01.726084 | orchestrator | ok: [testbed-manager] 2026-01-10 14:17:01.726091 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:17:01.726098 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:17:01.726103 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:17:01.726109 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:17:01.726114 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:17:01.726120 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:17:01.726125 | orchestrator | 2026-01-10 14:17:01.726131 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-01-10 14:17:01.726137 | orchestrator | Saturday 10 January 2026 14:16:31 +0000 (0:00:01.562) 0:07:15.813 ****** 2026-01-10 14:17:01.726142 | orchestrator | ok: [testbed-manager] 2026-01-10 14:17:01.726148 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:17:01.726153 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:17:01.726159 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:17:01.726164 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:17:01.726169 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:17:01.726175 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:17:01.726180 | orchestrator | 2026-01-10 14:17:01.726185 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-01-10 14:17:01.726190 | orchestrator | Saturday 10 January 2026 14:16:33 +0000 (0:00:01.797) 0:07:17.611 ****** 2026-01-10 14:17:01.726196 | orchestrator | ok: [testbed-manager] 2026-01-10 14:17:01.726201 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:17:01.726206 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:17:01.726211 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:17:01.726216 | 
orchestrator | changed: [testbed-node-0] 2026-01-10 14:17:01.726222 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:17:01.726227 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:17:01.726232 | orchestrator | 2026-01-10 14:17:01.726237 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-10 14:17:01.726242 | orchestrator | Saturday 10 January 2026 14:16:34 +0000 (0:00:01.738) 0:07:19.349 ****** 2026-01-10 14:17:01.726248 | orchestrator | ok: [testbed-manager] 2026-01-10 14:17:01.726253 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:17:01.726258 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:17:01.726263 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:17:01.726269 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:17:01.726274 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:17:01.726279 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:17:01.726284 | orchestrator | 2026-01-10 14:17:01.726326 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-10 14:17:01.726332 | orchestrator | Saturday 10 January 2026 14:16:35 +0000 (0:00:00.840) 0:07:20.190 ****** 2026-01-10 14:17:01.726337 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:17:01.726342 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:17:01.726348 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:17:01.726353 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:17:01.726358 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:17:01.726363 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:17:01.726368 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:17:01.726374 | orchestrator | 2026-01-10 14:17:01.726380 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-01-10 14:17:01.726408 | orchestrator | Saturday 10 January 2026 14:16:36 +0000 (0:00:01.039) 0:07:21.230 ****** 
2026-01-10 14:17:01.726414 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:17:01.726419 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:17:01.726425 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:17:01.726430 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:17:01.726435 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:17:01.726440 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:17:01.726445 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:17:01.726450 | orchestrator | 2026-01-10 14:17:01.726456 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-01-10 14:17:01.726461 | orchestrator | Saturday 10 January 2026 14:16:37 +0000 (0:00:00.514) 0:07:21.744 ****** 2026-01-10 14:17:01.726466 | orchestrator | ok: [testbed-manager] 2026-01-10 14:17:01.726471 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:17:01.726476 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:17:01.726482 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:17:01.726487 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:17:01.726492 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:17:01.726497 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:17:01.726502 | orchestrator | 2026-01-10 14:17:01.726507 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-01-10 14:17:01.726513 | orchestrator | Saturday 10 January 2026 14:16:37 +0000 (0:00:00.544) 0:07:22.288 ****** 2026-01-10 14:17:01.726518 | orchestrator | ok: [testbed-manager] 2026-01-10 14:17:01.726523 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:17:01.726528 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:17:01.726533 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:17:01.726538 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:17:01.726544 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:17:01.726549 | orchestrator | ok: [testbed-node-2] 2026-01-10 
14:17:01.726554 | orchestrator |
2026-01-10 14:17:01.726560 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-01-10 14:17:01.726565 | orchestrator | Saturday 10 January 2026 14:16:38 +0000 (0:00:00.559) 0:07:22.848 ******
2026-01-10 14:17:01.726570 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:01.726576 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:17:01.726581 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:17:01.726586 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:17:01.726591 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:17:01.726597 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:17:01.726602 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:17:01.726607 | orchestrator |
2026-01-10 14:17:01.726612 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-01-10 14:17:01.726618 | orchestrator | Saturday 10 January 2026 14:16:39 +0000 (0:00:00.744) 0:07:23.592 ******
2026-01-10 14:17:01.726623 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:17:01.726628 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:17:01.726633 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:01.726638 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:17:01.726643 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:17:01.726648 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:17:01.726653 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:17:01.726659 | orchestrator |
2026-01-10 14:17:01.726681 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-01-10 14:17:01.726686 | orchestrator | Saturday 10 January 2026 14:16:44 +0000 (0:00:05.258) 0:07:28.851 ******
2026-01-10 14:17:01.726691 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:17:01.726696 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:17:01.726701 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:17:01.726706 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:17:01.726710 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:17:01.726715 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:17:01.726720 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:17:01.726730 | orchestrator |
2026-01-10 14:17:01.726735 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-01-10 14:17:01.726740 | orchestrator | Saturday 10 January 2026 14:16:44 +0000 (0:00:00.571) 0:07:29.422 ******
2026-01-10 14:17:01.726748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:17:01.726755 | orchestrator |
2026-01-10 14:17:01.726760 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-01-10 14:17:01.726778 | orchestrator | Saturday 10 January 2026 14:16:45 +0000 (0:00:01.053) 0:07:30.475 ******
2026-01-10 14:17:01.726783 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:01.726788 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:17:01.726793 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:17:01.726797 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:17:01.726802 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:17:01.726807 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:17:01.726812 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:17:01.726818 | orchestrator |
2026-01-10 14:17:01.726825 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-01-10 14:17:01.726833 | orchestrator | Saturday 10 January 2026 14:16:47 +0000 (0:00:01.918) 0:07:32.394 ******
2026-01-10 14:17:01.726842 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:01.726848 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:17:01.726855 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:17:01.726862 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:17:01.726869 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:17:01.726876 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:17:01.726883 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:17:01.726890 | orchestrator |
2026-01-10 14:17:01.726897 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-01-10 14:17:01.726904 | orchestrator | Saturday 10 January 2026 14:16:48 +0000 (0:00:01.107) 0:07:33.502 ******
2026-01-10 14:17:01.726911 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:01.726918 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:17:01.726925 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:17:01.726933 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:17:01.726940 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:17:01.726948 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:17:01.726956 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:17:01.726964 | orchestrator |
2026-01-10 14:17:01.726973 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-01-10 14:17:01.726979 | orchestrator | Saturday 10 January 2026 14:16:49 +0000 (0:00:00.875) 0:07:34.377 ******
2026-01-10 14:17:01.726984 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:17:01.726992 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:17:01.726997 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:17:01.727002 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:17:01.727007 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:17:01.727012 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:17:01.727017 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-10 14:17:01.727022 | orchestrator |
2026-01-10 14:17:01.727036 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-01-10 14:17:01.727042 | orchestrator | Saturday 10 January 2026 14:16:51 +0000 (0:00:01.979) 0:07:36.357 ******
2026-01-10 14:17:01.727047 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:17:01.727052 | orchestrator |
2026-01-10 14:17:01.727057 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-01-10 14:17:01.727062 | orchestrator | Saturday 10 January 2026 14:16:52 +0000 (0:00:00.846) 0:07:37.204 ******
2026-01-10 14:17:01.727067 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:01.727072 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:01.727076 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:01.727081 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:01.727086 | orchestrator | changed: [testbed-manager]
2026-01-10 14:17:01.727091 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:01.727096 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:01.727101 | orchestrator |
2026-01-10 14:17:01.727111 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-01-10 14:17:32.428370 | orchestrator | Saturday 10 January 2026 14:17:01 +0000 (0:00:09.050) 0:07:46.254 ******
2026-01-10 14:17:32.428462 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:32.428473 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:17:32.428480 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:17:32.428487 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:17:32.428494 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:17:32.428501 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:17:32.428508 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:17:32.428515 | orchestrator |
2026-01-10 14:17:32.428523 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-01-10 14:17:32.428531 | orchestrator | Saturday 10 January 2026 14:17:03 +0000 (0:00:02.048) 0:07:48.302 ******
2026-01-10 14:17:32.428538 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:17:32.428544 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:17:32.428551 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:17:32.428558 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:17:32.428564 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:17:32.428571 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:17:32.428578 | orchestrator |
2026-01-10 14:17:32.428585 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-01-10 14:17:32.428592 | orchestrator | Saturday 10 January 2026 14:17:05 +0000 (0:00:01.320) 0:07:49.623 ******
2026-01-10 14:17:32.428599 | orchestrator | changed: [testbed-manager]
2026-01-10 14:17:32.428606 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:32.428613 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:32.428619 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:32.428626 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:32.428633 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:32.428639 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:32.428646 | orchestrator |
2026-01-10 14:17:32.428653 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-01-10 14:17:32.428660 | orchestrator |
2026-01-10 14:17:32.428666 | orchestrator | TASK [Include hardening role] **************************************************
2026-01-10 14:17:32.428673 | orchestrator | Saturday 10 January 2026 14:17:06 +0000 (0:00:01.239) 0:07:50.862 ******
2026-01-10 14:17:32.428680 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:17:32.428687 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:17:32.428693 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:17:32.428700 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:17:32.428707 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:17:32.428713 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:17:32.428720 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:17:32.428727 | orchestrator |
2026-01-10 14:17:32.428751 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-01-10 14:17:32.428759 | orchestrator |
2026-01-10 14:17:32.428765 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-01-10 14:17:32.428772 | orchestrator | Saturday 10 January 2026 14:17:07 +0000 (0:00:00.732) 0:07:51.594 ******
2026-01-10 14:17:32.428778 | orchestrator | changed: [testbed-manager]
2026-01-10 14:17:32.428785 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:32.428792 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:32.428798 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:32.428805 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:32.428812 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:32.428818 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:32.428825 | orchestrator |
2026-01-10 14:17:32.428832 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-01-10 14:17:32.428838 | orchestrator | Saturday 10 January 2026 14:17:08 +0000 (0:00:01.353) 0:07:52.948 ******
2026-01-10 14:17:32.428845 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:32.428851 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:17:32.428858 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:17:32.428865 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:17:32.428871 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:17:32.428878 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:17:32.428884 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:17:32.428892 | orchestrator |
2026-01-10 14:17:32.428899 | orchestrator | TASK [Include auditd role] *****************************************************
2026-01-10 14:17:32.428907 | orchestrator | Saturday 10 January 2026 14:17:09 +0000 (0:00:01.457) 0:07:54.406 ******
2026-01-10 14:17:32.428915 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:17:32.428983 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:17:32.428991 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:17:32.428999 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:17:32.429006 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:17:32.429014 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:17:32.429021 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:17:32.429028 | orchestrator |
2026-01-10 14:17:32.429036 | orchestrator | TASK [Include smartd role] *****************************************************
2026-01-10 14:17:32.429044 | orchestrator | Saturday 10 January 2026 14:17:10 +0000 (0:00:00.518) 0:07:54.924 ******
2026-01-10 14:17:32.429063 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:17:32.429073 | orchestrator |
2026-01-10 14:17:32.429081 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-01-10 14:17:32.429088 | orchestrator | Saturday 10 January 2026 14:17:11 +0000 (0:00:01.054) 0:07:55.979 ******
2026-01-10 14:17:32.429097 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:17:32.429107 | orchestrator |
2026-01-10 14:17:32.429115 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-01-10 14:17:32.429123 | orchestrator | Saturday 10 January 2026 14:17:12 +0000 (0:00:00.841) 0:07:56.821 ******
2026-01-10 14:17:32.429130 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:32.429138 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:32.429145 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:32.429152 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:32.429160 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:32.429167 | orchestrator | changed: [testbed-manager]
2026-01-10 14:17:32.429175 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:32.429182 | orchestrator |
2026-01-10 14:17:32.429203 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-01-10 14:17:32.429211 | orchestrator | Saturday 10 January 2026 14:17:20 +0000 (0:00:08.454) 0:08:05.275 ******
2026-01-10 14:17:32.429226 | orchestrator | changed: [testbed-manager]
2026-01-10 14:17:32.429234 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:32.429241 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:32.429284 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:32.429291 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:32.429297 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:32.429304 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:32.429311 | orchestrator |
2026-01-10 14:17:32.429317 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-01-10 14:17:32.429324 | orchestrator | Saturday 10 January 2026 14:17:21 +0000 (0:00:00.816) 0:08:06.092 ******
2026-01-10 14:17:32.429331 | orchestrator | changed: [testbed-manager]
2026-01-10 14:17:32.429338 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:32.429344 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:32.429351 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:32.429357 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:32.429364 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:32.429371 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:32.429377 | orchestrator |
2026-01-10 14:17:32.429384 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-01-10 14:17:32.429391 | orchestrator | Saturday 10 January 2026 14:17:22 +0000 (0:00:01.381) 0:08:07.473 ******
2026-01-10 14:17:32.429397 | orchestrator | changed: [testbed-manager]
2026-01-10 14:17:32.429404 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:32.429411 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:32.429417 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:32.429424 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:32.429430 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:32.429437 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:32.429444 | orchestrator |
2026-01-10 14:17:32.429450 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-01-10 14:17:32.429457 | orchestrator | Saturday 10 January 2026 14:17:24 +0000 (0:00:01.968) 0:08:09.442 ******
2026-01-10 14:17:32.429464 | orchestrator | changed: [testbed-manager]
2026-01-10 14:17:32.429471 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:32.429477 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:32.429484 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:32.429490 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:32.429497 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:32.429504 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:32.429510 | orchestrator |
2026-01-10 14:17:32.429517 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-01-10 14:17:32.429524 | orchestrator | Saturday 10 January 2026 14:17:26 +0000 (0:00:01.250) 0:08:10.692 ******
2026-01-10 14:17:32.429530 | orchestrator | changed: [testbed-manager]
2026-01-10 14:17:32.429537 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:32.429544 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:32.429550 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:32.429557 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:32.429564 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:32.429570 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:32.429577 | orchestrator |
2026-01-10 14:17:32.429584 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-01-10 14:17:32.429591 | orchestrator |
2026-01-10 14:17:32.429597 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-01-10 14:17:32.429604 | orchestrator | Saturday 10 January 2026 14:17:27 +0000 (0:00:01.246) 0:08:11.939 ******
2026-01-10 14:17:32.429611 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:17:32.429618 | orchestrator |
2026-01-10 14:17:32.429625 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-10 14:17:32.429637 | orchestrator | Saturday 10 January 2026 14:17:28 +0000 (0:00:00.883) 0:08:12.822 ******
2026-01-10 14:17:32.429644 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:32.429650 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:17:32.429657 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:17:32.429664 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:17:32.429671 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:17:32.429677 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:17:32.429684 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:17:32.429691 | orchestrator |
2026-01-10 14:17:32.429697 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-10 14:17:32.429704 | orchestrator | Saturday 10 January 2026 14:17:29 +0000 (0:00:01.065) 0:08:13.888 ******
2026-01-10 14:17:32.429711 | orchestrator | changed: [testbed-manager]
2026-01-10 14:17:32.429717 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:32.429724 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:32.429731 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:32.429737 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:32.429748 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:32.429755 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:32.429761 | orchestrator |
2026-01-10 14:17:32.429768 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-01-10 14:17:32.429775 | orchestrator | Saturday 10 January 2026 14:17:30 +0000 (0:00:01.191) 0:08:15.080 ******
2026-01-10 14:17:32.429782 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:17:32.429789 | orchestrator |
2026-01-10 14:17:32.429795 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-10 14:17:32.429802 | orchestrator | Saturday 10 January 2026 14:17:31 +0000 (0:00:01.000) 0:08:16.081 ******
2026-01-10 14:17:32.429809 | orchestrator | ok: [testbed-manager]
2026-01-10 14:17:32.429816 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:17:32.429822 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:17:32.429829 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:17:32.429836 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:17:32.429842 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:17:32.429849 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:17:32.429856 | orchestrator |
2026-01-10 14:17:32.429867 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-10 14:17:34.016131 | orchestrator | Saturday 10 January 2026 14:17:32 +0000 (0:00:00.875) 0:08:16.957 ******
2026-01-10 14:17:34.016302 | orchestrator | changed: [testbed-manager]
2026-01-10 14:17:34.016316 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:17:34.016324 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:17:34.016332 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:17:34.016339 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:17:34.016346 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:17:34.016353 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:17:34.016360 | orchestrator |
2026-01-10 14:17:34.016368 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:17:34.016377 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-01-10 14:17:34.016386 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-10 14:17:34.016393 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-10 14:17:34.016400 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-10 14:17:34.016407 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-01-10 14:17:34.016444 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-10 14:17:34.016451 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-10 14:17:34.016458 | orchestrator |
2026-01-10 14:17:34.016465 | orchestrator |
2026-01-10 14:17:34.016472 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:17:34.016479 | orchestrator | Saturday 10 January 2026 14:17:33 +0000 (0:00:01.119) 0:08:18.076 ******
2026-01-10 14:17:34.016486 | orchestrator | ===============================================================================
2026-01-10 14:17:34.016493 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.18s
2026-01-10 14:17:34.016499 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.06s
2026-01-10 14:17:34.016506 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.22s
2026-01-10 14:17:34.016513 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.27s
2026-01-10 14:17:34.016519 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.09s
2026-01-10 14:17:34.016527 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.85s
2026-01-10 14:17:34.016533 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.34s
2026-01-10 14:17:34.016540 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.31s
2026-01-10 14:17:34.016547 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.69s
2026-01-10 14:17:34.016553 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.05s
2026-01-10 14:17:34.016560 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.82s
2026-01-10 14:17:34.016567 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.45s
2026-01-10 14:17:34.016573 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.41s
2026-01-10 14:17:34.016580 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.23s
2026-01-10 14:17:34.016587 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.11s
2026-01-10 14:17:34.016594 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.50s
2026-01-10 14:17:34.016600 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.10s
2026-01-10 14:17:34.016618 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.90s
2026-01-10 14:17:34.016626 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.63s
2026-01-10 14:17:34.016632 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.30s
2026-01-10 14:17:34.324582 | orchestrator | + osism apply fail2ban
2026-01-10 14:17:47.267123 | orchestrator | 2026-01-10 14:17:47 | INFO  | Task eda9cddc-1da5-4f58-bfab-b0ce59c78d79 (fail2ban) was prepared for execution.
2026-01-10 14:17:47.267260 | orchestrator | 2026-01-10 14:17:47 | INFO  | It takes a moment until task eda9cddc-1da5-4f58-bfab-b0ce59c78d79 (fail2ban) has been started and output is visible here.
2026-01-10 14:18:09.114791 | orchestrator |
2026-01-10 14:18:09.114947 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-01-10 14:18:09.114968 | orchestrator |
2026-01-10 14:18:09.114982 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-01-10 14:18:09.115000 | orchestrator | Saturday 10 January 2026 14:17:51 +0000 (0:00:00.258) 0:00:00.258 ******
2026-01-10 14:18:09.115045 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:18:09.115110 | orchestrator |
2026-01-10 14:18:09.115130 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-01-10 14:18:09.115147 | orchestrator | Saturday 10 January 2026 14:17:53 +0000 (0:00:01.151) 0:00:01.410 ******
2026-01-10 14:18:09.115166 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:18:09.115188 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:18:09.115235 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:18:09.115254 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:18:09.115273 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:18:09.115292 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:18:09.115311 | orchestrator | changed: [testbed-manager]
2026-01-10 14:18:09.115331 | orchestrator |
2026-01-10 14:18:09.115350 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-01-10 14:18:09.115371 | orchestrator | Saturday 10 January 2026 14:18:03 +0000 (0:00:10.935) 0:00:12.346 ******
2026-01-10 14:18:09.115390 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:18:09.115408 | orchestrator | changed: [testbed-manager]
2026-01-10 14:18:09.115419 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:18:09.115430 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:18:09.115442 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:18:09.115453 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:18:09.115464 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:18:09.115475 | orchestrator |
2026-01-10 14:18:09.115487 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-01-10 14:18:09.115498 | orchestrator | Saturday 10 January 2026 14:18:05 +0000 (0:00:01.535) 0:00:13.881 ******
2026-01-10 14:18:09.115509 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:18:09.115522 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:18:09.115533 | orchestrator | ok: [testbed-manager]
2026-01-10 14:18:09.115544 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:18:09.115555 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:18:09.115566 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:18:09.115577 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:18:09.115588 | orchestrator |
2026-01-10 14:18:09.115599 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-01-10 14:18:09.115610 | orchestrator | Saturday 10 January 2026 14:18:07 +0000 (0:00:01.493) 0:00:15.375 ******
2026-01-10 14:18:09.115622 | orchestrator | changed: [testbed-manager]
2026-01-10 14:18:09.115633 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:18:09.115644 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:18:09.115655 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:18:09.115666 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:18:09.115677 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:18:09.115689 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:18:09.115700 | orchestrator |
2026-01-10 14:18:09.115711 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:18:09.115722 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:18:09.115735 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:18:09.115746 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:18:09.115757 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:18:09.115769 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:18:09.115780 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:18:09.115791 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:18:09.115817 | orchestrator |
2026-01-10 14:18:09.115829 | orchestrator |
2026-01-10 14:18:09.115840 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:18:09.115852 | orchestrator | Saturday 10 January 2026 14:18:08 +0000 (0:00:01.658) 0:00:17.034 ******
2026-01-10 14:18:09.115863 | orchestrator | ===============================================================================
2026-01-10 14:18:09.115889 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 10.94s
2026-01-10 14:18:09.115901 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.66s
2026-01-10 14:18:09.115912 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.54s
2026-01-10 14:18:09.115923 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.49s
2026-01-10 14:18:09.115934 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.15s
2026-01-10 14:18:09.458564 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-01-10 14:18:09.458665 | orchestrator | + osism apply network
2026-01-10 14:18:21.559763 | orchestrator | 2026-01-10 14:18:21 | INFO  | Task 7a88e06e-ada5-45e8-94da-9860488d93fe (network) was prepared for execution.
2026-01-10 14:18:21.561089 | orchestrator | 2026-01-10 14:18:21 | INFO  | It takes a moment until task 7a88e06e-ada5-45e8-94da-9860488d93fe (network) has been started and output is visible here.
2026-01-10 14:18:51.241422 | orchestrator |
2026-01-10 14:18:51.241575 | orchestrator | PLAY [Apply role network] ******************************************************
2026-01-10 14:18:51.241601 | orchestrator |
2026-01-10 14:18:51.241656 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-01-10 14:18:51.241678 | orchestrator | Saturday 10 January 2026 14:18:25 +0000 (0:00:00.268) 0:00:00.268 ******
2026-01-10 14:18:51.241696 | orchestrator | ok: [testbed-manager]
2026-01-10 14:18:51.241715 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:18:51.241732 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:18:51.241750 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:18:51.241767 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:18:51.241784 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:18:51.241801 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:18:51.241817 | orchestrator |
2026-01-10 14:18:51.241834 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-01-10 14:18:51.241850 | orchestrator | Saturday 10 January 2026 14:18:26 +0000 (0:00:00.741) 0:00:01.009 ******
2026-01-10 14:18:51.241871 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:18:51.241890 | orchestrator |
2026-01-10 14:18:51.241907 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-01-10 14:18:51.241923 | orchestrator | Saturday 10 January 2026 14:18:27 +0000 (0:00:01.321) 0:00:02.331 ******
2026-01-10 14:18:51.241940 | orchestrator | ok: [testbed-manager]
2026-01-10 14:18:51.241956 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:18:51.241974 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:18:51.241991 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:18:51.242007 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:18:51.242093 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:18:51.242113 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:18:51.242131 | orchestrator |
2026-01-10 14:18:51.242171 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-01-10 14:18:51.242189 | orchestrator | Saturday 10 January 2026 14:18:30 +0000 (0:00:02.086) 0:00:04.417 ******
2026-01-10 14:18:51.242208 | orchestrator | ok: [testbed-manager]
2026-01-10 14:18:51.242225 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:18:51.242242 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:18:51.242260 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:18:51.242354 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:18:51.242375 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:18:51.242394 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:18:51.242412 | orchestrator |
2026-01-10 14:18:51.242430 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-01-10 14:18:51.242448 | orchestrator | Saturday 10 January 2026 14:18:31 +0000 (0:00:01.785) 0:00:06.203 ******
2026-01-10 14:18:51.242464 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-01-10 14:18:51.242484 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-01-10 14:18:51.242502 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-01-10 14:18:51.242521 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-01-10 14:18:51.242537 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-01-10 14:18:51.242554 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-01-10 14:18:51.242572 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-01-10 14:18:51.242589 | orchestrator |
2026-01-10 14:18:51.242607 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-01-10 14:18:51.242625 | orchestrator | Saturday 10 January 2026 14:18:32 +0000 (0:00:01.021) 0:00:07.224 ******
2026-01-10 14:18:51.242643 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-10 14:18:51.242661 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-10 14:18:51.242679 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-10 14:18:51.242697 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-10 14:18:51.242714 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-10 14:18:51.242732 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-10 14:18:51.242749 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-10 14:18:51.242766 | orchestrator |
2026-01-10 14:18:51.242783 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-01-10 14:18:51.242801 | orchestrator | Saturday 10 January 2026 14:18:36 +0000 (0:00:03.388) 0:00:10.612 ******
2026-01-10 14:18:51.242820 | orchestrator | changed: [testbed-manager]
2026-01-10 14:18:51.242837 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:18:51.242854 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:18:51.242871 | orchestrator | changed:
[testbed-node-2] 2026-01-10 14:18:51.242889 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:18:51.242907 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:18:51.242926 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:18:51.242944 | orchestrator | 2026-01-10 14:18:51.242961 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-01-10 14:18:51.242979 | orchestrator | Saturday 10 January 2026 14:18:37 +0000 (0:00:01.703) 0:00:12.316 ****** 2026-01-10 14:18:51.242997 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:18:51.243015 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:18:51.243032 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-10 14:18:51.243049 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-10 14:18:51.243066 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-10 14:18:51.243083 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-10 14:18:51.243100 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-10 14:18:51.243119 | orchestrator | 2026-01-10 14:18:51.243168 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-01-10 14:18:51.243187 | orchestrator | Saturday 10 January 2026 14:18:39 +0000 (0:00:01.839) 0:00:14.155 ****** 2026-01-10 14:18:51.243204 | orchestrator | ok: [testbed-manager] 2026-01-10 14:18:51.243220 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:18:51.243236 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:18:51.243252 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:18:51.243267 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:18:51.243284 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:18:51.243302 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:18:51.243319 | orchestrator | 2026-01-10 14:18:51.243336 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-01-10 14:18:51.243395 | 
orchestrator | Saturday 10 January 2026 14:18:40 +0000 (0:00:01.168) 0:00:15.324 ****** 2026-01-10 14:18:51.243415 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:18:51.243432 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:18:51.243449 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:18:51.243466 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:18:51.243480 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:18:51.243493 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:18:51.243506 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:18:51.243520 | orchestrator | 2026-01-10 14:18:51.243552 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-01-10 14:18:51.243567 | orchestrator | Saturday 10 January 2026 14:18:41 +0000 (0:00:00.659) 0:00:15.983 ****** 2026-01-10 14:18:51.243580 | orchestrator | ok: [testbed-manager] 2026-01-10 14:18:51.243593 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:18:51.243606 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:18:51.243620 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:18:51.243635 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:18:51.243648 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:18:51.243662 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:18:51.243675 | orchestrator | 2026-01-10 14:18:51.243688 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-01-10 14:18:51.243702 | orchestrator | Saturday 10 January 2026 14:18:44 +0000 (0:00:02.406) 0:00:18.390 ****** 2026-01-10 14:18:51.243715 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:18:51.243729 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:18:51.243743 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:18:51.243756 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:18:51.243769 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:18:51.243782 | 
orchestrator | skipping: [testbed-node-5] 2026-01-10 14:18:51.243795 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-01-10 14:18:51.243809 | orchestrator | 2026-01-10 14:18:51.243822 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-01-10 14:18:51.243834 | orchestrator | Saturday 10 January 2026 14:18:45 +0000 (0:00:00.994) 0:00:19.384 ****** 2026-01-10 14:18:51.243848 | orchestrator | ok: [testbed-manager] 2026-01-10 14:18:51.243862 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:18:51.243876 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:18:51.243889 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:18:51.243901 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:18:51.243915 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:18:51.243928 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:18:51.243941 | orchestrator | 2026-01-10 14:18:51.243955 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-01-10 14:18:51.243969 | orchestrator | Saturday 10 January 2026 14:18:46 +0000 (0:00:01.722) 0:00:21.106 ****** 2026-01-10 14:18:51.243983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:18:51.243999 | orchestrator | 2026-01-10 14:18:51.244013 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-01-10 14:18:51.244027 | orchestrator | Saturday 10 January 2026 14:18:48 +0000 (0:00:01.289) 0:00:22.396 ****** 2026-01-10 14:18:51.244041 | orchestrator | ok: [testbed-manager] 2026-01-10 14:18:51.244055 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:18:51.244069 | orchestrator 
| ok: [testbed-node-1] 2026-01-10 14:18:51.244082 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:18:51.244095 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:18:51.244107 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:18:51.244121 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:18:51.244186 | orchestrator | 2026-01-10 14:18:51.244202 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-01-10 14:18:51.244226 | orchestrator | Saturday 10 January 2026 14:18:49 +0000 (0:00:01.192) 0:00:23.589 ****** 2026-01-10 14:18:51.244239 | orchestrator | ok: [testbed-manager] 2026-01-10 14:18:51.244252 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:18:51.244265 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:18:51.244279 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:18:51.244293 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:18:51.244305 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:18:51.244319 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:18:51.244327 | orchestrator | 2026-01-10 14:18:51.244336 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-10 14:18:51.244344 | orchestrator | Saturday 10 January 2026 14:18:49 +0000 (0:00:00.703) 0:00:24.292 ****** 2026-01-10 14:18:51.244352 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:18:51.244360 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:18:51.244368 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:18:51.244377 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:18:51.244391 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:18:51.244410 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:18:51.244425 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:18:51.244439 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:18:51.244452 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:18:51.244466 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:18:51.244479 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:18:51.244492 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-01-10 14:18:51.244505 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:18:51.244519 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-10 14:18:51.244533 | orchestrator | 2026-01-10 14:18:51.244556 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-01-10 14:19:08.966541 | orchestrator | Saturday 10 January 2026 14:18:51 +0000 (0:00:01.293) 0:00:25.586 ****** 2026-01-10 14:19:08.966640 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:19:08.966657 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:19:08.966669 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:19:08.966680 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:19:08.966692 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:19:08.966703 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:19:08.966714 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:19:08.966725 | orchestrator | 2026-01-10 14:19:08.966737 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-01-10 14:19:08.966749 | orchestrator | Saturday 10 January 2026 14:18:51 +0000 (0:00:00.667) 0:00:26.253 ****** 2026-01-10 14:19:08.966762 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-0, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4 2026-01-10 14:19:08.966776 | orchestrator | 2026-01-10 14:19:08.966788 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-01-10 14:19:08.966799 | orchestrator | Saturday 10 January 2026 14:18:56 +0000 (0:00:05.002) 0:00:31.256 ****** 2026-01-10 14:19:08.966812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:19:08.966824 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:19:08.966858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:19:08.966871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:19:08.966882 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-10 
14:19:08.966894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:19:08.966917 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:19:08.966939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:19:08.966959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:19:08.966970 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:19:08.966982 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:19:08.967010 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:19:08.967022 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:19:08.967034 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:19:08.967045 | orchestrator | 2026-01-10 14:19:08.967056 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-01-10 14:19:08.967067 | orchestrator | Saturday 10 January 2026 14:19:02 +0000 (0:00:06.100) 0:00:37.357 ****** 2026-01-10 14:19:08.967087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:19:08.967102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:19:08.967148 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:19:08.967163 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:19:08.967176 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:19:08.967193 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:19:08.967214 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-10 14:19:08.967233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:19:08.967255 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:19:08.967285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 
'mtu': 1350, 'vni': 23}}) 2026-01-10 14:19:08.967308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:19:08.967325 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:19:08.967351 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:19:23.317683 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-10 14:19:23.317804 | orchestrator | 2026-01-10 14:19:23.317818 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-01-10 14:19:23.317827 | orchestrator | Saturday 10 January 2026 14:19:08 +0000 (0:00:05.941) 0:00:43.299 ****** 2026-01-10 14:19:23.317836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:19:23.317844 | orchestrator | 2026-01-10 14:19:23.317851 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-01-10 14:19:23.317858 | orchestrator | Saturday 10 January 2026 14:19:10 +0000 (0:00:01.316) 0:00:44.615 ****** 2026-01-10 14:19:23.317865 | orchestrator | ok: [testbed-manager] 2026-01-10 14:19:23.317873 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:19:23.317880 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:19:23.317887 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:19:23.317894 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:19:23.317901 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:19:23.317908 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:19:23.317915 | orchestrator | 2026-01-10 14:19:23.317922 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-10 14:19:23.317929 | orchestrator | Saturday 10 January 2026 14:19:11 +0000 (0:00:01.251) 0:00:45.867 ****** 2026-01-10 14:19:23.317936 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-10 14:19:23.317944 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-10 14:19:23.317950 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-10 14:19:23.317957 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-10 14:19:23.317964 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-10 14:19:23.317971 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-10 14:19:23.317977 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-10 14:19:23.317984 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-10 14:19:23.317991 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:19:23.317999 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan1.network)  2026-01-10 14:19:23.318006 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-10 14:19:23.318013 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-10 14:19:23.318062 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-10 14:19:23.318069 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:19:23.318076 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-10 14:19:23.318083 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-10 14:19:23.318108 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-10 14:19:23.318116 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-10 14:19:23.318123 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:19:23.318129 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-10 14:19:23.318136 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-10 14:19:23.318143 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-10 14:19:23.318150 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-10 14:19:23.318157 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:19:23.318183 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-10 14:19:23.318191 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-10 14:19:23.318198 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-10 14:19:23.318205 | orchestrator | 
skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-10 14:19:23.318212 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:19:23.318219 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:19:23.318226 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-10 14:19:23.318233 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-10 14:19:23.318240 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-10 14:19:23.318247 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-10 14:19:23.318254 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:19:23.318261 | orchestrator | 2026-01-10 14:19:23.318269 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-01-10 14:19:23.318290 | orchestrator | Saturday 10 January 2026 14:19:12 +0000 (0:00:01.033) 0:00:46.900 ****** 2026-01-10 14:19:23.318298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:19:23.318306 | orchestrator | 2026-01-10 14:19:23.318313 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-01-10 14:19:23.318321 | orchestrator | Saturday 10 January 2026 14:19:13 +0000 (0:00:01.361) 0:00:48.262 ****** 2026-01-10 14:19:23.318328 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:19:23.318336 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:19:23.318344 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:19:23.318351 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:19:23.318358 | orchestrator | skipping: [testbed-node-3] 2026-01-10 
14:19:23.318365 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:19:23.318372 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:19:23.318380 | orchestrator | 2026-01-10 14:19:23.318387 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-01-10 14:19:23.318395 | orchestrator | Saturday 10 January 2026 14:19:14 +0000 (0:00:00.671) 0:00:48.934 ****** 2026-01-10 14:19:23.318402 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:19:23.318409 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:19:23.318416 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:19:23.318424 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:19:23.318431 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:19:23.318439 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:19:23.318446 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:19:23.318453 | orchestrator | 2026-01-10 14:19:23.318460 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-01-10 14:19:23.318468 | orchestrator | Saturday 10 January 2026 14:19:15 +0000 (0:00:00.842) 0:00:49.776 ****** 2026-01-10 14:19:23.318476 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:19:23.318483 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:19:23.318490 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:19:23.318497 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:19:23.318504 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:19:23.318512 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:19:23.318519 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:19:23.318527 | orchestrator | 2026-01-10 14:19:23.318534 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-01-10 14:19:23.318542 | orchestrator | Saturday 10 January 2026 14:19:16 +0000 (0:00:00.636) 0:00:50.413 ****** 2026-01-10 
14:19:23.318555 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:19:23.318562 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:19:23.318569 | orchestrator | ok: [testbed-manager]
2026-01-10 14:19:23.318577 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:19:23.318584 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:19:23.318591 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:19:23.318598 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:19:23.318606 | orchestrator |
2026-01-10 14:19:23.318613 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-01-10 14:19:23.318620 | orchestrator | Saturday 10 January 2026 14:19:18 +0000 (0:00:02.494) 0:00:52.907 ******
2026-01-10 14:19:23.318627 | orchestrator | ok: [testbed-manager]
2026-01-10 14:19:23.318634 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:19:23.318641 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:19:23.318648 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:19:23.318654 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:19:23.318661 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:19:23.318668 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:19:23.318674 | orchestrator |
2026-01-10 14:19:23.318681 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-01-10 14:19:23.318688 | orchestrator | Saturday 10 January 2026 14:19:19 +0000 (0:00:00.973) 0:00:53.881 ******
2026-01-10 14:19:23.318695 | orchestrator | ok: [testbed-manager]
2026-01-10 14:19:23.318702 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:19:23.318709 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:19:23.318715 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:19:23.318722 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:19:23.318729 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:19:23.318735 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:19:23.318742 | orchestrator |
2026-01-10 14:19:23.318749 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-01-10 14:19:23.318756 | orchestrator | Saturday 10 January 2026 14:19:21 +0000 (0:00:02.299) 0:00:56.181 ******
2026-01-10 14:19:23.318763 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:19:23.318770 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:19:23.318777 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:19:23.318783 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:19:23.318789 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:19:23.318794 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:19:23.318800 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:19:23.318806 | orchestrator |
2026-01-10 14:19:23.318817 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-01-10 14:19:23.318824 | orchestrator | Saturday 10 January 2026 14:19:22 +0000 (0:00:00.851) 0:00:57.032 ******
2026-01-10 14:19:23.318832 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:19:23.318838 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:19:23.318845 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:19:23.318852 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:19:23.318859 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:19:23.318865 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:19:23.318872 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:19:23.318879 | orchestrator |
2026-01-10 14:19:23.318886 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:19:23.318894 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-01-10 14:19:23.318902 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-10 14:19:23.318914 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-10 14:19:23.748806 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-10 14:19:23.748957 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-10 14:19:23.748975 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-10 14:19:23.748987 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-10 14:19:23.748999 | orchestrator |
2026-01-10 14:19:23.749012 | orchestrator |
2026-01-10 14:19:23.749024 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:19:23.749037 | orchestrator | Saturday 10 January 2026 14:19:23 +0000 (0:00:00.631) 0:00:57.664 ******
2026-01-10 14:19:23.749048 | orchestrator | ===============================================================================
2026-01-10 14:19:23.749064 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.10s
2026-01-10 14:19:23.749083 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.94s
2026-01-10 14:19:23.749151 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 5.00s
2026-01-10 14:19:23.749171 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.39s
2026-01-10 14:19:23.749189 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 2.49s
2026-01-10 14:19:23.749208 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.41s
2026-01-10 14:19:23.749226 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.30s
2026-01-10 14:19:23.749245 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.09s
2026-01-10 14:19:23.749264 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.84s
2026-01-10 14:19:23.749282 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.79s
2026-01-10 14:19:23.749300 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.72s
2026-01-10 14:19:23.749320 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.70s
2026-01-10 14:19:23.749340 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.36s
2026-01-10 14:19:23.749360 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.32s
2026-01-10 14:19:23.749378 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.32s
2026-01-10 14:19:23.749397 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.29s
2026-01-10 14:19:23.749416 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.29s
2026-01-10 14:19:23.749435 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.25s
2026-01-10 14:19:23.749455 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.19s
2026-01-10 14:19:23.749475 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.17s
2026-01-10 14:19:24.108821 | orchestrator | + osism apply wireguard
2026-01-10 14:19:36.356062 | orchestrator | 2026-01-10 14:19:36 | INFO  | Task c8090d03-e41a-4986-9b6e-0df23f38903c (wireguard) was prepared for execution.
2026-01-10 14:19:36.356220 | orchestrator | 2026-01-10 14:19:36 | INFO  | It takes a moment until task c8090d03-e41a-4986-9b6e-0df23f38903c (wireguard) has been started and output is visible here.
2026-01-10 14:19:57.350180 | orchestrator |
2026-01-10 14:19:57.350305 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-01-10 14:19:57.350323 | orchestrator |
2026-01-10 14:19:57.350336 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-01-10 14:19:57.350348 | orchestrator | Saturday 10 January 2026 14:19:40 +0000 (0:00:00.227) 0:00:00.227 ******
2026-01-10 14:19:57.350364 | orchestrator | ok: [testbed-manager]
2026-01-10 14:19:57.350378 | orchestrator |
2026-01-10 14:19:57.350416 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-01-10 14:19:57.350428 | orchestrator | Saturday 10 January 2026 14:19:42 +0000 (0:00:01.683) 0:00:01.911 ******
2026-01-10 14:19:57.350439 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:57.350451 | orchestrator |
2026-01-10 14:19:57.350462 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-01-10 14:19:57.350473 | orchestrator | Saturday 10 January 2026 14:19:49 +0000 (0:00:07.035) 0:00:08.947 ******
2026-01-10 14:19:57.350484 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:57.350495 | orchestrator |
2026-01-10 14:19:57.350506 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-01-10 14:19:57.350517 | orchestrator | Saturday 10 January 2026 14:19:50 +0000 (0:00:00.588) 0:00:09.535 ******
2026-01-10 14:19:57.350528 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:57.350539 | orchestrator |
2026-01-10 14:19:57.350550 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-01-10 14:19:57.350561 | orchestrator | Saturday 10 January 2026 14:19:50 +0000 (0:00:00.447) 0:00:09.983 ******
2026-01-10 14:19:57.350572 | orchestrator | ok: [testbed-manager]
2026-01-10 14:19:57.350583 | orchestrator |
2026-01-10 14:19:57.350594 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-01-10 14:19:57.350605 | orchestrator | Saturday 10 January 2026 14:19:51 +0000 (0:00:00.712) 0:00:10.696 ******
2026-01-10 14:19:57.350616 | orchestrator | ok: [testbed-manager]
2026-01-10 14:19:57.350626 | orchestrator |
2026-01-10 14:19:57.350637 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-01-10 14:19:57.350648 | orchestrator | Saturday 10 January 2026 14:19:51 +0000 (0:00:00.431) 0:00:11.128 ******
2026-01-10 14:19:57.350659 | orchestrator | ok: [testbed-manager]
2026-01-10 14:19:57.350670 | orchestrator |
2026-01-10 14:19:57.350681 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-01-10 14:19:57.350691 | orchestrator | Saturday 10 January 2026 14:19:52 +0000 (0:00:00.406) 0:00:11.534 ******
2026-01-10 14:19:57.350704 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:57.350722 | orchestrator |
2026-01-10 14:19:57.350742 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-01-10 14:19:57.350753 | orchestrator | Saturday 10 January 2026 14:19:53 +0000 (0:00:01.203) 0:00:12.738 ******
2026-01-10 14:19:57.350764 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-10 14:19:57.350776 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:57.350787 | orchestrator |
2026-01-10 14:19:57.350798 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-01-10 14:19:57.350809 | orchestrator | Saturday 10 January 2026 14:19:54 +0000 (0:00:01.007) 0:00:13.746 ******
2026-01-10 14:19:57.350819 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:57.350830 | orchestrator |
2026-01-10 14:19:57.350841 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-01-10 14:19:57.350852 | orchestrator | Saturday 10 January 2026 14:19:55 +0000 (0:00:01.752) 0:00:15.498 ******
2026-01-10 14:19:57.350863 | orchestrator | changed: [testbed-manager]
2026-01-10 14:19:57.350874 | orchestrator |
2026-01-10 14:19:57.350885 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:19:57.350896 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:19:57.350909 | orchestrator |
2026-01-10 14:19:57.350928 | orchestrator |
2026-01-10 14:19:57.350940 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:19:57.350951 | orchestrator | Saturday 10 January 2026 14:19:56 +0000 (0:00:00.933) 0:00:16.432 ******
2026-01-10 14:19:57.350961 | orchestrator | ===============================================================================
2026-01-10 14:19:57.350972 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.04s
2026-01-10 14:19:57.350983 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.75s
2026-01-10 14:19:57.351003 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.68s
2026-01-10 14:19:57.351014 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.20s
2026-01-10 14:19:57.351024 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.01s
2026-01-10 14:19:57.351035 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.93s
2026-01-10 14:19:57.351046 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.71s
2026-01-10 14:19:57.351082 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.59s
2026-01-10 14:19:57.351093 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s
2026-01-10 14:19:57.351104 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.43s
2026-01-10 14:19:57.351132 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s
2026-01-10 14:19:57.681207 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-01-10 14:19:57.717242 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-01-10 14:19:57.717435 | orchestrator | Dload Upload Total Spent Left Speed
2026-01-10 14:19:57.795019 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 181 0 --:--:-- --:--:-- --:--:-- 181
2026-01-10 14:19:57.809409 | orchestrator | + osism apply --environment custom workarounds
2026-01-10 14:19:59.825221 | orchestrator | 2026-01-10 14:19:59 | INFO  | Trying to run play workarounds in environment custom
2026-01-10 14:20:10.065989 | orchestrator | 2026-01-10 14:20:10 | INFO  | Task 89c36206-2c37-482e-a17f-dc2e29b81d12 (workarounds) was prepared for execution.
2026-01-10 14:20:10.066170 | orchestrator | 2026-01-10 14:20:10 | INFO  | It takes a moment until task 89c36206-2c37-482e-a17f-dc2e29b81d12 (workarounds) has been started and output is visible here.
2026-01-10 14:20:36.162466 | orchestrator |
2026-01-10 14:20:36.162567 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:20:36.162584 | orchestrator |
2026-01-10 14:20:36.162597 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-01-10 14:20:36.162609 | orchestrator | Saturday 10 January 2026 14:20:14 +0000 (0:00:00.136) 0:00:00.136 ******
2026-01-10 14:20:36.162621 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-01-10 14:20:36.162632 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-01-10 14:20:36.162643 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-01-10 14:20:36.162654 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-01-10 14:20:36.162665 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-01-10 14:20:36.162676 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-01-10 14:20:36.162686 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-01-10 14:20:36.162697 | orchestrator |
2026-01-10 14:20:36.162708 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-01-10 14:20:36.162719 | orchestrator |
2026-01-10 14:20:36.162730 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-10 14:20:36.162741 | orchestrator | Saturday 10 January 2026 14:20:15 +0000 (0:00:00.860) 0:00:00.996 ******
2026-01-10 14:20:36.162752 | orchestrator | ok: [testbed-manager]
2026-01-10 14:20:36.162764 | orchestrator |
2026-01-10 14:20:36.162775 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-01-10 14:20:36.162786 | orchestrator |
2026-01-10 14:20:36.162797 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-10 14:20:36.162807 | orchestrator | Saturday 10 January 2026 14:20:17 +0000 (0:00:02.603) 0:00:03.600 ******
2026-01-10 14:20:36.162818 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:20:36.162856 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:20:36.162867 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:20:36.162878 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:20:36.162888 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:20:36.162899 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:20:36.162909 | orchestrator |
2026-01-10 14:20:36.162920 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-01-10 14:20:36.162931 | orchestrator |
2026-01-10 14:20:36.162942 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-01-10 14:20:36.162953 | orchestrator | Saturday 10 January 2026 14:20:19 +0000 (0:00:01.846) 0:00:05.447 ******
2026-01-10 14:20:36.162964 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-10 14:20:36.162976 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-10 14:20:36.162987 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-10 14:20:36.162998 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-10 14:20:36.163044 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-10 14:20:36.163057 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-10 14:20:36.163069 | orchestrator |
2026-01-10 14:20:36.163082 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-01-10 14:20:36.163094 | orchestrator | Saturday 10 January 2026 14:20:21 +0000 (0:00:01.598) 0:00:07.046 ******
2026-01-10 14:20:36.163107 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:20:36.163120 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:20:36.163132 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:20:36.163144 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:20:36.163156 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:20:36.163169 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:20:36.163181 | orchestrator |
2026-01-10 14:20:36.163194 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-01-10 14:20:36.163206 | orchestrator | Saturday 10 January 2026 14:20:25 +0000 (0:00:03.722) 0:00:10.768 ******
2026-01-10 14:20:36.163218 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:20:36.163231 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:20:36.163242 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:20:36.163255 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:20:36.163267 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:20:36.163279 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:20:36.163291 | orchestrator |
2026-01-10 14:20:36.163303 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-01-10 14:20:36.163315 | orchestrator |
2026-01-10 14:20:36.163327 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-01-10 14:20:36.163339 | orchestrator | Saturday 10 January 2026 14:20:25 +0000 (0:00:00.753) 0:00:11.522 ******
2026-01-10 14:20:36.163352 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:20:36.163364 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:20:36.163376 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:20:36.163387 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:20:36.163398 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:20:36.163408 | orchestrator | changed: [testbed-manager]
2026-01-10 14:20:36.163419 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:20:36.163430 | orchestrator |
2026-01-10 14:20:36.163440 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-01-10 14:20:36.163451 | orchestrator | Saturday 10 January 2026 14:20:27 +0000 (0:00:01.638) 0:00:13.161 ******
2026-01-10 14:20:36.163462 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:20:36.163473 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:20:36.163506 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:20:36.163518 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:20:36.163528 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:20:36.163540 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:20:36.163567 | orchestrator | changed: [testbed-manager]
2026-01-10 14:20:36.163579 | orchestrator |
2026-01-10 14:20:36.163590 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-01-10 14:20:36.163601 | orchestrator | Saturday 10 January 2026 14:20:29 +0000 (0:00:01.713) 0:00:14.875 ******
2026-01-10 14:20:36.163612 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:20:36.163623 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:20:36.163634 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:20:36.163645 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:20:36.163655 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:20:36.163666 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:20:36.163677 | orchestrator | ok: [testbed-manager]
2026-01-10 14:20:36.163688 | orchestrator |
2026-01-10 14:20:36.163699 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-01-10 14:20:36.163710 | orchestrator | Saturday 10 January 2026 14:20:30 +0000 (0:00:01.596) 0:00:16.471 ******
2026-01-10 14:20:36.163721 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:20:36.163731 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:20:36.163742 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:20:36.163753 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:20:36.163764 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:20:36.163775 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:20:36.163786 | orchestrator | changed: [testbed-manager]
2026-01-10 14:20:36.163796 | orchestrator |
2026-01-10 14:20:36.163807 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-01-10 14:20:36.163818 | orchestrator | Saturday 10 January 2026 14:20:32 +0000 (0:00:01.872) 0:00:18.344 ******
2026-01-10 14:20:36.163829 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:20:36.163840 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:20:36.163851 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:20:36.163861 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:20:36.163872 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:20:36.163883 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:20:36.163893 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:20:36.163904 | orchestrator |
2026-01-10 14:20:36.163915 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-01-10 14:20:36.163926 | orchestrator |
2026-01-10 14:20:36.163937 | orchestrator | TASK [Install python3-docker] **************************************************
2026-01-10 14:20:36.163948 | orchestrator | Saturday 10 January 2026 14:20:33 +0000 (0:00:00.648) 0:00:18.993 ******
2026-01-10 14:20:36.163959 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:20:36.163970 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:20:36.163981 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:20:36.163992 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:20:36.164029 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:20:36.164041 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:20:36.164052 | orchestrator | ok: [testbed-manager]
2026-01-10 14:20:36.164063 | orchestrator |
2026-01-10 14:20:36.164074 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:20:36.164087 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:20:36.164099 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:20:36.164111 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:20:36.164122 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:20:36.164141 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:20:36.164152 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:20:36.164163 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:20:36.164174 | orchestrator |
2026-01-10 14:20:36.164186 | orchestrator |
2026-01-10 14:20:36.164197 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:20:36.164208 | orchestrator | Saturday 10 January 2026 14:20:36 +0000 (0:00:02.857) 0:00:21.850 ******
2026-01-10 14:20:36.164219 | orchestrator | ===============================================================================
2026-01-10 14:20:36.164230 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.72s
2026-01-10 14:20:36.164241 | orchestrator | Install python3-docker -------------------------------------------------- 2.86s
2026-01-10 14:20:36.164252 | orchestrator | Apply netplan configuration --------------------------------------------- 2.60s
2026-01-10 14:20:36.164262 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.87s
2026-01-10 14:20:36.164273 | orchestrator | Apply netplan configuration --------------------------------------------- 1.85s
2026-01-10 14:20:36.164284 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.71s
2026-01-10 14:20:36.164295 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.64s
2026-01-10 14:20:36.164306 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.60s
2026-01-10 14:20:36.164317 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.60s
2026-01-10 14:20:36.164333 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.86s
2026-01-10 14:20:36.164345 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.75s
2026-01-10 14:20:36.164362 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.65s
2026-01-10 14:20:36.871089 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-01-10 14:20:49.089941 | orchestrator | 2026-01-10 14:20:49 | INFO  | Task 3b7fada3-f111-4b01-bc78-ba92bfdcd2f6 (reboot) was prepared for execution.
2026-01-10 14:20:49.090146 | orchestrator | 2026-01-10 14:20:49 | INFO  | It takes a moment until task 3b7fada3-f111-4b01-bc78-ba92bfdcd2f6 (reboot) has been started and output is visible here.
2026-01-10 14:20:59.713215 | orchestrator | 2026-01-10 14:20:59.713322 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-10 14:20:59.713339 | orchestrator | 2026-01-10 14:20:59.713352 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-10 14:20:59.713364 | orchestrator | Saturday 10 January 2026 14:20:53 +0000 (0:00:00.214) 0:00:00.214 ****** 2026-01-10 14:20:59.713376 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:20:59.713388 | orchestrator | 2026-01-10 14:20:59.713399 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-10 14:20:59.713411 | orchestrator | Saturday 10 January 2026 14:20:53 +0000 (0:00:00.115) 0:00:00.329 ****** 2026-01-10 14:20:59.713422 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:20:59.713433 | orchestrator | 2026-01-10 14:20:59.713444 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-10 14:20:59.713455 | orchestrator | Saturday 10 January 2026 14:20:54 +0000 (0:00:00.956) 0:00:01.286 ****** 2026-01-10 14:20:59.713494 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:20:59.713506 | orchestrator | 2026-01-10 14:20:59.713517 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-10 14:20:59.713528 | orchestrator | 2026-01-10 14:20:59.713567 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-10 14:20:59.713579 | orchestrator | Saturday 10 January 2026 14:20:54 +0000 (0:00:00.142) 0:00:01.428 ****** 2026-01-10 14:20:59.713597 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:20:59.713615 | orchestrator | 2026-01-10 14:20:59.713633 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-10 14:20:59.713651 | orchestrator | Saturday 10 January 
2026 14:20:54 +0000 (0:00:00.112) 0:00:01.541 ****** 2026-01-10 14:20:59.713670 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:20:59.713689 | orchestrator | 2026-01-10 14:20:59.713704 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-10 14:20:59.713715 | orchestrator | Saturday 10 January 2026 14:20:55 +0000 (0:00:00.684) 0:00:02.226 ****** 2026-01-10 14:20:59.713726 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:20:59.713736 | orchestrator | 2026-01-10 14:20:59.713747 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-10 14:20:59.713758 | orchestrator | 2026-01-10 14:20:59.713771 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-10 14:20:59.713784 | orchestrator | Saturday 10 January 2026 14:20:55 +0000 (0:00:00.123) 0:00:02.349 ****** 2026-01-10 14:20:59.713797 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:20:59.713809 | orchestrator | 2026-01-10 14:20:59.713822 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-10 14:20:59.713834 | orchestrator | Saturday 10 January 2026 14:20:55 +0000 (0:00:00.211) 0:00:02.561 ****** 2026-01-10 14:20:59.713846 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:20:59.713858 | orchestrator | 2026-01-10 14:20:59.713871 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-10 14:20:59.713883 | orchestrator | Saturday 10 January 2026 14:20:56 +0000 (0:00:00.684) 0:00:03.245 ****** 2026-01-10 14:20:59.713895 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:20:59.713908 | orchestrator | 2026-01-10 14:20:59.713920 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-10 14:20:59.713932 | orchestrator | 2026-01-10 14:20:59.713944 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-01-10 14:20:59.713957 | orchestrator | Saturday 10 January 2026 14:20:56 +0000 (0:00:00.117) 0:00:03.363 ****** 2026-01-10 14:20:59.713969 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:20:59.714012 | orchestrator | 2026-01-10 14:20:59.714079 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-10 14:20:59.714092 | orchestrator | Saturday 10 January 2026 14:20:56 +0000 (0:00:00.141) 0:00:03.505 ****** 2026-01-10 14:20:59.714104 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:20:59.714117 | orchestrator | 2026-01-10 14:20:59.714129 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-10 14:20:59.714139 | orchestrator | Saturday 10 January 2026 14:20:57 +0000 (0:00:00.692) 0:00:04.197 ****** 2026-01-10 14:20:59.714150 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:20:59.714161 | orchestrator | 2026-01-10 14:20:59.714171 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-10 14:20:59.714182 | orchestrator | 2026-01-10 14:20:59.714193 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-10 14:20:59.714204 | orchestrator | Saturday 10 January 2026 14:20:57 +0000 (0:00:00.153) 0:00:04.350 ****** 2026-01-10 14:20:59.714215 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:20:59.714226 | orchestrator | 2026-01-10 14:20:59.714236 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-10 14:20:59.714247 | orchestrator | Saturday 10 January 2026 14:20:57 +0000 (0:00:00.103) 0:00:04.453 ****** 2026-01-10 14:20:59.714259 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:20:59.714269 | orchestrator | 2026-01-10 14:20:59.714280 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
*************************
2026-01-10 14:20:59.714291 | orchestrator | Saturday 10 January 2026 14:20:58 +0000 (0:00:00.670) 0:00:05.124 ******
2026-01-10 14:20:59.714314 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:20:59.714325 | orchestrator |
2026-01-10 14:20:59.714336 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-10 14:20:59.714346 | orchestrator |
2026-01-10 14:20:59.714373 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-10 14:20:59.714384 | orchestrator | Saturday 10 January 2026 14:20:58 +0000 (0:00:00.117) 0:00:05.241 ******
2026-01-10 14:20:59.714395 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:20:59.714406 | orchestrator |
2026-01-10 14:20:59.714417 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-10 14:20:59.714428 | orchestrator | Saturday 10 January 2026 14:20:58 +0000 (0:00:00.112) 0:00:05.354 ******
2026-01-10 14:20:59.714439 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:20:59.714449 | orchestrator |
2026-01-10 14:20:59.714460 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-10 14:20:59.714471 | orchestrator | Saturday 10 January 2026 14:20:59 +0000 (0:00:00.676) 0:00:06.030 ******
2026-01-10 14:20:59.714501 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:20:59.714513 | orchestrator |
2026-01-10 14:20:59.714524 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:20:59.714535 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:20:59.714547 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:20:59.714558 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:20:59.714569 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:20:59.714580 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:20:59.714591 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 14:20:59.714602 | orchestrator |
2026-01-10 14:20:59.714613 | orchestrator |
2026-01-10 14:20:59.714624 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:20:59.714635 | orchestrator | Saturday 10 January 2026 14:20:59 +0000 (0:00:00.039) 0:00:06.070 ******
2026-01-10 14:20:59.714646 | orchestrator | ===============================================================================
2026-01-10 14:20:59.714657 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.37s
2026-01-10 14:20:59.714668 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.80s
2026-01-10 14:20:59.714679 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.69s
2026-01-10 14:21:00.055403 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-01-10 14:21:12.221064 | orchestrator | 2026-01-10 14:21:12 | INFO  | Task 171f6c2e-efb8-4604-8b3e-fe5091229475 (wait-for-connection) was prepared for execution.
2026-01-10 14:21:12.221152 | orchestrator | 2026-01-10 14:21:12 | INFO  | It takes a moment until task 171f6c2e-efb8-4604-8b3e-fe5091229475 (wait-for-connection) has been started and output is visible here.
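The sequence above is a reboot-then-reconnect pattern: the reboot task deliberately does not wait (the SSH session dies with the host), and a separate `wait-for-connection` run polls each node until it answers again. A minimal sketch of that pattern in shell; `retry_until` and the commented host loop are illustrative helpers, not code from the testbed repository:

```shell
# retry_until <max_attempts> <delay_seconds> <command...>
# Re-run the command until it succeeds or the attempt budget is exhausted.
retry_until() {
    local max_attempts="$1" delay="$2" attempt=1
    shift 2
    until "$@"; do
        if (( attempt++ >= max_attempts )); then
            return 1
        fi
        sleep "$delay"
    done
}

# Usage against real nodes (hypothetical host names):
#   for node in testbed-node-{0..5}; do
#       ssh "$node" 'sudo systemctl reboot' || true   # connection drop is expected
#   done
#   for node in testbed-node-{0..5}; do
#       retry_until 60 5 ssh -o ConnectTimeout=5 "$node" true
#   done
```

Ignoring the reboot command's failure is what makes the first loop "do not wait": the SSH connection is torn down mid-command, so a non-zero exit is the normal case.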
2026-01-10 14:21:28.669448 | orchestrator |
2026-01-10 14:21:28.669560 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-01-10 14:21:28.669579 | orchestrator |
2026-01-10 14:21:28.669591 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-01-10 14:21:28.669604 | orchestrator | Saturday 10 January 2026 14:21:16 +0000 (0:00:00.232) 0:00:00.232 ******
2026-01-10 14:21:28.669615 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:21:28.669654 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:21:28.669696 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:21:28.669707 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:21:28.669718 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:21:28.669729 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:21:28.669739 | orchestrator |
2026-01-10 14:21:28.669751 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:21:28.669763 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:21:28.669776 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:21:28.669788 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:21:28.669799 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:21:28.669810 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:21:28.669821 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:21:28.669832 | orchestrator |
2026-01-10 14:21:28.669843 | orchestrator |
2026-01-10 14:21:28.669854 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:21:28.669866 | orchestrator | Saturday 10 January 2026 14:21:28 +0000 (0:00:11.616) 0:00:11.849 ******
2026-01-10 14:21:28.669892 | orchestrator | ===============================================================================
2026-01-10 14:21:28.669904 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.62s
2026-01-10 14:21:29.006148 | orchestrator | + osism apply hddtemp
2026-01-10 14:21:41.162068 | orchestrator | 2026-01-10 14:21:41 | INFO  | Task 394db75b-708a-458b-87dc-227a09de351e (hddtemp) was prepared for execution.
2026-01-10 14:21:41.162186 | orchestrator | 2026-01-10 14:21:41 | INFO  | It takes a moment until task 394db75b-708a-458b-87dc-227a09de351e (hddtemp) has been started and output is visible here.
2026-01-10 14:22:10.454904 | orchestrator |
2026-01-10 14:22:10.455068 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-01-10 14:22:10.455087 | orchestrator |
2026-01-10 14:22:10.455099 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-01-10 14:22:10.455110 | orchestrator | Saturday 10 January 2026 14:21:45 +0000 (0:00:00.284) 0:00:00.284 ******
2026-01-10 14:22:10.455121 | orchestrator | ok: [testbed-manager]
2026-01-10 14:22:10.455132 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:22:10.455142 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:22:10.455152 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:22:10.455162 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:22:10.455172 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:22:10.455182 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:22:10.455192 | orchestrator |
2026-01-10 14:22:10.455202 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-01-10 14:22:10.455212 | orchestrator | Saturday 10 January 2026
14:21:46 +0000 (0:00:00.770) 0:00:01.054 ******
2026-01-10 14:22:10.455224 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:22:10.455236 | orchestrator |
2026-01-10 14:22:10.455247 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-01-10 14:22:10.455258 | orchestrator | Saturday 10 January 2026 14:21:47 +0000 (0:00:01.215) 0:00:02.270 ******
2026-01-10 14:22:10.455268 | orchestrator | ok: [testbed-manager]
2026-01-10 14:22:10.455278 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:22:10.455312 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:22:10.455323 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:22:10.455332 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:22:10.455342 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:22:10.455352 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:22:10.455361 | orchestrator |
2026-01-10 14:22:10.455371 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-01-10 14:22:10.455381 | orchestrator | Saturday 10 January 2026 14:21:49 +0000 (0:00:02.035) 0:00:04.305 ******
2026-01-10 14:22:10.455391 | orchestrator | changed: [testbed-manager]
2026-01-10 14:22:10.455401 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:22:10.455411 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:22:10.455421 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:22:10.455430 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:22:10.455441 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:22:10.455459 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:22:10.455480 | orchestrator |
2026-01-10 14:22:10.455503 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-01-10 14:22:10.455520 | orchestrator | Saturday 10 January 2026 14:21:50 +0000 (0:00:01.245) 0:00:05.551 ******
2026-01-10 14:22:10.455536 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:22:10.455552 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:22:10.455568 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:22:10.455585 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:22:10.455601 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:22:10.455617 | orchestrator | ok: [testbed-manager]
2026-01-10 14:22:10.455628 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:22:10.455638 | orchestrator |
2026-01-10 14:22:10.455650 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-01-10 14:22:10.455661 | orchestrator | Saturday 10 January 2026 14:21:51 +0000 (0:00:01.172) 0:00:06.723 ******
2026-01-10 14:22:10.455672 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:22:10.455683 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:22:10.455695 | orchestrator | changed: [testbed-manager]
2026-01-10 14:22:10.455705 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:22:10.455717 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:22:10.455727 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:22:10.455738 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:22:10.455749 | orchestrator |
2026-01-10 14:22:10.455760 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-01-10 14:22:10.455771 | orchestrator | Saturday 10 January 2026 14:21:52 +0000 (0:00:00.879) 0:00:07.602 ******
2026-01-10 14:22:10.455782 | orchestrator | changed: [testbed-manager]
2026-01-10 14:22:10.455793 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:22:10.455804 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:22:10.455815 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:22:10.455826 | orchestrator | changed:
[testbed-node-4]
2026-01-10 14:22:10.455835 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:22:10.455845 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:22:10.455854 | orchestrator |
2026-01-10 14:22:10.455864 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-01-10 14:22:10.455873 | orchestrator | Saturday 10 January 2026 14:22:06 +0000 (0:00:13.977) 0:00:21.579 ******
2026-01-10 14:22:10.455883 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:22:10.455894 | orchestrator |
2026-01-10 14:22:10.455904 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-01-10 14:22:10.455938 | orchestrator | Saturday 10 January 2026 14:22:08 +0000 (0:00:01.273) 0:00:22.853 ******
2026-01-10 14:22:10.455949 | orchestrator | changed: [testbed-manager]
2026-01-10 14:22:10.455959 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:22:10.455968 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:22:10.456003 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:22:10.456013 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:22:10.456022 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:22:10.456032 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:22:10.456041 | orchestrator |
2026-01-10 14:22:10.456051 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:22:10.456067 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:22:10.456108 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:22:10.456127 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:22:10.456143 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:22:10.456158 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:22:10.456173 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:22:10.456188 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:22:10.456203 | orchestrator |
2026-01-10 14:22:10.456219 | orchestrator |
2026-01-10 14:22:10.456236 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:22:10.456252 | orchestrator | Saturday 10 January 2026 14:22:10 +0000 (0:00:01.940) 0:00:24.793 ******
2026-01-10 14:22:10.456270 | orchestrator | ===============================================================================
2026-01-10 14:22:10.456287 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.98s
2026-01-10 14:22:10.456304 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.04s
2026-01-10 14:22:10.456320 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.94s
2026-01-10 14:22:10.456334 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.27s
2026-01-10 14:22:10.456343 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.25s
2026-01-10 14:22:10.456353 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.22s
2026-01-10 14:22:10.456362 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.17s
2026-01-10 14:22:10.456375 | orchestrator | osism.services.hddtemp : Load
Kernel Module drivetemp ------------------- 0.88s
2026-01-10 14:22:10.456392 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.77s
2026-01-10 14:22:10.783131 | orchestrator | ++ semver latest 7.1.1
2026-01-10 14:22:10.839889 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-10 14:22:10.840012 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-10 14:22:10.840030 | orchestrator | + sudo systemctl restart manager.service
2026-01-10 14:22:32.091809 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-10 14:22:32.092009 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-01-10 14:22:32.092042 | orchestrator | + local max_attempts=60
2026-01-10 14:22:32.092067 | orchestrator | + local name=ceph-ansible
2026-01-10 14:22:32.092086 | orchestrator | + local attempt_num=1
2026-01-10 14:22:32.092121 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:22:32.127807 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:22:32.127869 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:22:32.127880 | orchestrator | + sleep 5
2026-01-10 14:22:37.134159 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:22:37.169867 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:22:37.170085 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:22:37.170106 | orchestrator | + sleep 5
2026-01-10 14:22:42.173457 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:22:42.214438 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:22:42.214520 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:22:42.214531 | orchestrator | + sleep 5
2026-01-10 14:22:47.219359 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:22:47.259623 | orchestrator | +
[[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:22:47.259698 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:22:47.259704 | orchestrator | + sleep 5
2026-01-10 14:22:52.265355 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:22:52.309008 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:22:52.309104 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:22:52.309120 | orchestrator | + sleep 5
2026-01-10 14:22:57.314599 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:22:57.353395 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:22:57.353475 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:22:57.353486 | orchestrator | + sleep 5
2026-01-10 14:23:02.359066 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:23:02.402761 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:23:02.402943 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:23:02.402963 | orchestrator | + sleep 5
2026-01-10 14:23:07.408142 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:23:07.447688 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-10 14:23:07.447788 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:23:07.447805 | orchestrator | + sleep 5
2026-01-10 14:23:12.450740 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:23:12.489650 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-10 14:23:12.489774 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:23:12.489800 | orchestrator | + sleep 5
2026-01-10 14:23:17.493467 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:23:17.535196 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-10 14:23:17.535313 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:23:17.535336 | orchestrator | + sleep 5
2026-01-10 14:23:22.540109 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:23:22.577619 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-10 14:23:22.577717 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:23:22.577733 | orchestrator | + sleep 5
2026-01-10 14:23:27.583299 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:23:27.624809 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-10 14:23:27.624940 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:23:27.624955 | orchestrator | + sleep 5
2026-01-10 14:23:32.628973 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:23:32.678236 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-10 14:23:32.678353 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-10 14:23:32.678377 | orchestrator | + sleep 5
2026-01-10 14:23:37.683774 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-10 14:23:37.728835 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:23:37.728989 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-01-10 14:23:37.729195 | orchestrator | + local max_attempts=60
2026-01-10 14:23:37.729221 | orchestrator | + local name=kolla-ansible
2026-01-10 14:23:37.729241 | orchestrator | + local attempt_num=1
2026-01-10 14:23:37.729269 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-01-10 14:23:37.773067 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:23:37.773144 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-01-10 14:23:37.773155 | orchestrator | + local max_attempts=60
2026-01-10
14:23:37.773164 | orchestrator | + local name=osism-ansible
2026-01-10 14:23:37.773173 | orchestrator | + local attempt_num=1
2026-01-10 14:23:37.774208 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-01-10 14:23:37.802446 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-10 14:23:37.802517 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-10 14:23:37.802565 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-01-10 14:23:37.961814 | orchestrator | ARA in ceph-ansible already disabled.
2026-01-10 14:23:38.118166 | orchestrator | ARA in kolla-ansible already disabled.
2026-01-10 14:23:38.275108 | orchestrator | ARA in osism-ansible already disabled.
2026-01-10 14:23:38.454675 | orchestrator | ARA in osism-kubernetes already disabled.
2026-01-10 14:23:38.455638 | orchestrator | + osism apply gather-facts
2026-01-10 14:23:50.673057 | orchestrator | 2026-01-10 14:23:50 | INFO  | Task 2914dd76-5abb-4071-8f78-c466307a13ac (gather-facts) was prepared for execution.
2026-01-10 14:23:50.673155 | orchestrator | 2026-01-10 14:23:50 | INFO  | It takes a moment until task 2914dd76-5abb-4071-8f78-c466307a13ac (gather-facts) has been started and output is visible here.
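The `set -x` trace above expands a `wait_for_container_healthy` helper that polls `docker inspect` until the container's health check reports `healthy`. A hedged reconstruction of that loop; the function name, the 60-attempt limit, and the 5-second sleep come from the trace, while the `DOCKER` and `SLEEP_SECS` overrides are illustrative additions so the loop can be exercised without a Docker daemon:

```shell
# Allow the docker binary and poll interval to be overridden for testing
# (illustrative; the traced helper hardcodes /usr/bin/docker and sleep 5).
DOCKER="${DOCKER:-/usr/bin/docker}"
SLEEP_SECS="${SLEEP_SECS:-5}"

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy".
    while [[ "$("$DOCKER" inspect -f '{{.State.Health.Status}}' "$name")" != healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep "$SLEEP_SECS"
    done
}

# Usage, as in the trace:
#   wait_for_container_healthy 60 ceph-ansible
```

The status typically moves `unhealthy` → `starting` → `healthy` after a container restart, which is exactly the progression visible in the trace following `systemctl restart manager.service`.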
2026-01-10 14:24:04.825109 | orchestrator |
2026-01-10 14:24:04.825222 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-10 14:24:04.825239 | orchestrator |
2026-01-10 14:24:04.825252 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-10 14:24:04.825265 | orchestrator | Saturday 10 January 2026 14:23:55 +0000 (0:00:00.234) 0:00:00.234 ******
2026-01-10 14:24:04.825277 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:24:04.825289 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:24:04.825301 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:24:04.825312 | orchestrator | ok: [testbed-manager]
2026-01-10 14:24:04.825323 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:24:04.825334 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:24:04.825345 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:24:04.825356 | orchestrator |
2026-01-10 14:24:04.825368 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-10 14:24:04.825380 | orchestrator |
2026-01-10 14:24:04.825391 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-10 14:24:04.825402 | orchestrator | Saturday 10 January 2026 14:24:03 +0000 (0:00:08.710) 0:00:08.945 ******
2026-01-10 14:24:04.825414 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:24:04.825426 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:24:04.825437 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:24:04.825448 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:24:04.825459 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:24:04.825470 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:24:04.825481 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:24:04.825492 | orchestrator |
2026-01-10 14:24:04.825504 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:24:04.825515 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:24:04.825528 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:24:04.825551 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:24:04.825563 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:24:04.825575 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:24:04.825586 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:24:04.825597 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:24:04.825609 | orchestrator |
2026-01-10 14:24:04.825620 | orchestrator |
2026-01-10 14:24:04.825632 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:24:04.825645 | orchestrator | Saturday 10 January 2026 14:24:04 +0000 (0:00:00.609) 0:00:09.554 ******
2026-01-10 14:24:04.825686 | orchestrator | ===============================================================================
2026-01-10 14:24:04.825700 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.71s
2026-01-10 14:24:04.825713 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s
2026-01-10 14:24:05.230188 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-01-10 14:24:05.247792 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-01-10 14:24:05.269944 |
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-01-10 14:24:05.283786 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-01-10 14:24:05.297229 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-01-10 14:24:05.311741 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-01-10 14:24:05.328651 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-01-10 14:24:05.348397 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-01-10 14:24:05.365289 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-01-10 14:24:05.384036 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-01-10 14:24:05.404888 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-01-10 14:24:05.422317 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-01-10 14:24:05.441217 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-01-10 14:24:05.458445 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-01-10 14:24:05.476799 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-01-10 14:24:05.489576 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-01-10 14:24:05.499560 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-01-10 14:24:05.510164 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-01-10 14:24:05.529946 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-01-10 14:24:05.553110 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-01-10 14:24:05.571211 | orchestrator | + [[ false == \t\r\u\e ]]
2026-01-10 14:24:05.696832 | orchestrator | ok: Runtime: 0:24:54.440186
2026-01-10 14:24:05.814682 |
2026-01-10 14:24:05.814899 | TASK [Deploy services]
2026-01-10 14:24:06.353380 | orchestrator | skipping: Conditional result was False
2026-01-10 14:24:06.372617 |
2026-01-10 14:24:06.372796 | TASK [Deploy in a nutshell]
2026-01-10 14:24:07.099479 | orchestrator | + set -e
2026-01-10 14:24:07.100980 | orchestrator |
2026-01-10 14:24:07.100999 | orchestrator | # PULL IMAGES
2026-01-10 14:24:07.101007 | orchestrator |
2026-01-10 14:24:07.101017 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-10 14:24:07.101028 | orchestrator | ++ export INTERACTIVE=false
2026-01-10 14:24:07.101037 | orchestrator | ++ INTERACTIVE=false
2026-01-10 14:24:07.101064 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-10 14:24:07.101077 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-10 14:24:07.101085 | orchestrator | + source /opt/manager-vars.sh
2026-01-10 14:24:07.101091 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-10 14:24:07.101101 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-10 14:24:07.101107 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-10 14:24:07.101117 | orchestrator | ++
CEPH_VERSION=reef 2026-01-10 14:24:07.101123 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-10 14:24:07.101132 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-10 14:24:07.101138 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-10 14:24:07.101147 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-10 14:24:07.101154 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-01-10 14:24:07.101161 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-01-10 14:24:07.101167 | orchestrator | ++ export ARA=false 2026-01-10 14:24:07.101172 | orchestrator | ++ ARA=false 2026-01-10 14:24:07.101178 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-10 14:24:07.101184 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-10 14:24:07.101190 | orchestrator | ++ export TEMPEST=false 2026-01-10 14:24:07.101196 | orchestrator | ++ TEMPEST=false 2026-01-10 14:24:07.101201 | orchestrator | ++ export IS_ZUUL=true 2026-01-10 14:24:07.101207 | orchestrator | ++ IS_ZUUL=true 2026-01-10 14:24:07.101213 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.86 2026-01-10 14:24:07.101219 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.86 2026-01-10 14:24:07.101224 | orchestrator | ++ export EXTERNAL_API=false 2026-01-10 14:24:07.101230 | orchestrator | ++ EXTERNAL_API=false 2026-01-10 14:24:07.101236 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-10 14:24:07.101242 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-10 14:24:07.101248 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-10 14:24:07.101254 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-10 14:24:07.101259 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-10 14:24:07.101271 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-10 14:24:07.101277 | orchestrator | + echo 2026-01-10 14:24:07.101283 | orchestrator | + echo '# PULL IMAGES' 2026-01-10 14:24:07.101289 | orchestrator | + echo 2026-01-10 14:24:07.101299 | orchestrator | ++ semver latest 7.0.0 2026-01-10 
14:24:07.155937 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-10 14:24:07.156047 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-10 14:24:07.156067 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-01-10 14:24:09.051145 | orchestrator | 2026-01-10 14:24:09 | INFO  | Trying to run play pull-images in environment custom 2026-01-10 14:24:19.219035 | orchestrator | 2026-01-10 14:24:19 | INFO  | Task d6e0243b-4023-44b2-9137-c32b748de440 (pull-images) was prepared for execution. 2026-01-10 14:24:19.219165 | orchestrator | 2026-01-10 14:24:19 | INFO  | Task d6e0243b-4023-44b2-9137-c32b748de440 is running in background. No more output. Check ARA for logs. 2026-01-10 14:24:21.760252 | orchestrator | 2026-01-10 14:24:21 | INFO  | Trying to run play wipe-partitions in environment custom 2026-01-10 14:24:31.915500 | orchestrator | 2026-01-10 14:24:31 | INFO  | Task b739dcd8-567d-4040-a0af-215ff4da1ad6 (wipe-partitions) was prepared for execution. 2026-01-10 14:24:31.915579 | orchestrator | 2026-01-10 14:24:31 | INFO  | It takes a moment until task b739dcd8-567d-4040-a0af-215ff4da1ad6 (wipe-partitions) has been started and output is visible here. 
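Editor's note: the trace above shows how the pull-images play is gated. A minimal sketch of that gate (not the actual testbed script; `SEMVER_RESULT` stands in for the `semver latest 7.0.0` call, which printed -1 in the trace): the numeric comparison fails for `latest`, but the explicit string check still triggers the pull.

```shell
# Sketch of the version gate traced above (assumption: reconstructed logic,
# not the original script). "latest" compares below 7.0.0 numerically, so the
# literal check on the string is what enables the image pull.
MANAGER_VERSION=latest
SEMVER_RESULT=-1   # stand-in for: semver "$MANAGER_VERSION" 7.0.0

if [[ "$SEMVER_RESULT" -ge 0 ]] || [[ "$MANAGER_VERSION" == latest ]]; then
  ACTION="osism apply --no-wait -r 2 -e custom pull-images"
else
  ACTION="skip pull-images"
fi
echo "$ACTION"
```

With `MANAGER_VERSION=latest` this takes the first branch, matching the `osism apply` call seen in the log.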
2026-01-10 14:24:45.017928 | orchestrator | 2026-01-10 14:24:45.018073 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-01-10 14:24:45.018085 | orchestrator | 2026-01-10 14:24:45.018090 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-01-10 14:24:45.018101 | orchestrator | Saturday 10 January 2026 14:24:36 +0000 (0:00:00.134) 0:00:00.134 ****** 2026-01-10 14:24:45.018108 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:24:45.018113 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:24:45.018118 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:24:45.018123 | orchestrator | 2026-01-10 14:24:45.018129 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-01-10 14:24:45.018159 | orchestrator | Saturday 10 January 2026 14:24:37 +0000 (0:00:00.694) 0:00:00.829 ****** 2026-01-10 14:24:45.018165 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:24:45.018169 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:24:45.018177 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:24:45.018182 | orchestrator | 2026-01-10 14:24:45.018187 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-01-10 14:24:45.018191 | orchestrator | Saturday 10 January 2026 14:24:37 +0000 (0:00:00.403) 0:00:01.232 ****** 2026-01-10 14:24:45.018196 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:24:45.018201 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:24:45.018205 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:24:45.018210 | orchestrator | 2026-01-10 14:24:45.018215 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-01-10 14:24:45.018220 | orchestrator | Saturday 10 January 2026 14:24:38 +0000 (0:00:00.566) 0:00:01.798 ****** 2026-01-10 14:24:45.018227 | orchestrator | skipping: 
[testbed-node-3] 2026-01-10 14:24:45.018235 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:24:45.018242 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:24:45.018250 | orchestrator | 2026-01-10 14:24:45.018258 | orchestrator | TASK [Check device availability] *********************************************** 2026-01-10 14:24:45.018266 | orchestrator | Saturday 10 January 2026 14:24:38 +0000 (0:00:00.267) 0:00:02.066 ****** 2026-01-10 14:24:45.018274 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-01-10 14:24:45.018284 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-01-10 14:24:45.018292 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-01-10 14:24:45.018299 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-01-10 14:24:45.018307 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-01-10 14:24:45.018315 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-01-10 14:24:45.018323 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-01-10 14:24:45.018331 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-01-10 14:24:45.018340 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-01-10 14:24:45.018348 | orchestrator | 2026-01-10 14:24:45.018356 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-01-10 14:24:45.018366 | orchestrator | Saturday 10 January 2026 14:24:39 +0000 (0:00:01.276) 0:00:03.343 ****** 2026-01-10 14:24:45.018374 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-01-10 14:24:45.018383 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-01-10 14:24:45.018392 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-01-10 14:24:45.018397 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-01-10 14:24:45.018401 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-01-10 14:24:45.018406 | orchestrator | ok: 
[testbed-node-4] => (item=/dev/sdc) 2026-01-10 14:24:45.018410 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-01-10 14:24:45.018416 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-01-10 14:24:45.018421 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-01-10 14:24:45.018427 | orchestrator | 2026-01-10 14:24:45.018432 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-01-10 14:24:45.018437 | orchestrator | Saturday 10 January 2026 14:24:41 +0000 (0:00:01.586) 0:00:04.929 ****** 2026-01-10 14:24:45.018443 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-01-10 14:24:45.018448 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-01-10 14:24:45.018453 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-01-10 14:24:45.018459 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-01-10 14:24:45.018464 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-01-10 14:24:45.018474 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-01-10 14:24:45.018480 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-01-10 14:24:45.018491 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-01-10 14:24:45.018497 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-01-10 14:24:45.018502 | orchestrator | 2026-01-10 14:24:45.018507 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-01-10 14:24:45.018512 | orchestrator | Saturday 10 January 2026 14:24:43 +0000 (0:00:02.141) 0:00:07.071 ****** 2026-01-10 14:24:45.018518 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:24:45.018523 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:24:45.018528 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:24:45.018533 | orchestrator | 2026-01-10 14:24:45.018538 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-01-10 14:24:45.018544 | orchestrator | Saturday 10 January 2026 14:24:44 +0000 (0:00:00.619) 0:00:07.690 ****** 2026-01-10 14:24:45.018549 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:24:45.018554 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:24:45.018559 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:24:45.018564 | orchestrator | 2026-01-10 14:24:45.018569 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:24:45.018576 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:24:45.018582 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:24:45.018602 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:24:45.018607 | orchestrator | 2026-01-10 14:24:45.018612 | orchestrator | 2026-01-10 14:24:45.018618 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:24:45.018623 | orchestrator | Saturday 10 January 2026 14:24:44 +0000 (0:00:00.639) 0:00:08.329 ****** 2026-01-10 14:24:45.018629 | orchestrator | =============================================================================== 2026-01-10 14:24:45.018637 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.14s 2026-01-10 14:24:45.018645 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.59s 2026-01-10 14:24:45.018652 | orchestrator | Check device availability ----------------------------------------------- 1.28s 2026-01-10 14:24:45.018660 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.69s 2026-01-10 14:24:45.018667 | orchestrator | Request device events from the kernel 
----------------------------------- 0.64s 2026-01-10 14:24:45.018675 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s 2026-01-10 14:24:45.018682 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.57s 2026-01-10 14:24:45.018690 | orchestrator | Remove all rook related logical devices --------------------------------- 0.40s 2026-01-10 14:24:45.018699 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s 2026-01-10 14:24:57.637545 | orchestrator | 2026-01-10 14:24:57 | INFO  | Task 40f70d63-0e91-4a49-9a3e-eb87751ef79d (facts) was prepared for execution. 2026-01-10 14:24:57.637659 | orchestrator | 2026-01-10 14:24:57 | INFO  | It takes a moment until task 40f70d63-0e91-4a49-9a3e-eb87751ef79d (facts) has been started and output is visible here. 2026-01-10 14:25:10.551927 | orchestrator | 2026-01-10 14:25:10.552041 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-10 14:25:10.552059 | orchestrator | 2026-01-10 14:25:10.552071 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-10 14:25:10.552082 | orchestrator | Saturday 10 January 2026 14:25:02 +0000 (0:00:00.300) 0:00:00.300 ****** 2026-01-10 14:25:10.552094 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:25:10.552105 | orchestrator | ok: [testbed-manager] 2026-01-10 14:25:10.552116 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:25:10.552148 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:25:10.552160 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:25:10.552170 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:25:10.552181 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:25:10.552192 | orchestrator | 2026-01-10 14:25:10.552205 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-10 14:25:10.552215 | 
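Editor's note: the wipe-partitions play above boils down to a short per-disk sequence. A dry-run sketch (assumption: commands reconstructed from the task names; the device list mirrors the `/dev/sdb`..`/dev/sdd` items in the log; `run` only echoes, so the sketch is safe to execute):

```shell
# Dry-run wrapper: prints the command instead of executing it.
run() { echo "+ $*"; }

for dev in /dev/sdb /dev/sdc /dev/sdd; do
  run wipefs -a "$dev"                           # "Wipe partitions with wipefs"
  run dd if=/dev/zero of="$dev" bs=1M count=32   # "Overwrite first 32M with zeros"
done
run udevadm control --reload-rules               # "Reload udev rules"
run udevadm trigger --subsystem-match=block      # "Request device events from the kernel"
```

Zeroing the first 32M clears partition tables and LVM/Ceph labels that `wipefs` signatures alone may miss; the udev reload and trigger then make the kernel re-read the now-blank devices.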
orchestrator | Saturday 10 January 2026 14:25:03 +0000 (0:00:01.139) 0:00:01.439 ****** 2026-01-10 14:25:10.552226 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:25:10.552238 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:25:10.552248 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:25:10.552259 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:25:10.552269 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:25:10.552280 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:10.552291 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:10.552301 | orchestrator | 2026-01-10 14:25:10.552312 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-10 14:25:10.552323 | orchestrator | 2026-01-10 14:25:10.552333 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-10 14:25:10.552344 | orchestrator | Saturday 10 January 2026 14:25:04 +0000 (0:00:01.466) 0:00:02.906 ****** 2026-01-10 14:25:10.552355 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:25:10.552366 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:25:10.552377 | orchestrator | ok: [testbed-manager] 2026-01-10 14:25:10.552388 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:25:10.552399 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:25:10.552410 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:25:10.552420 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:25:10.552431 | orchestrator | 2026-01-10 14:25:10.552442 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-10 14:25:10.552453 | orchestrator | 2026-01-10 14:25:10.552464 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-10 14:25:10.552497 | orchestrator | Saturday 10 January 2026 14:25:09 +0000 (0:00:04.842) 0:00:07.748 ****** 2026-01-10 14:25:10.552510 | orchestrator | 
skipping: [testbed-manager] 2026-01-10 14:25:10.552523 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:25:10.552535 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:25:10.552547 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:25:10.552558 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:25:10.552570 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:25:10.552582 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:10.552594 | orchestrator | 2026-01-10 14:25:10.552607 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:25:10.552618 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:25:10.552629 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:25:10.552640 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:25:10.552651 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:25:10.552661 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:25:10.552672 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:25:10.552683 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:25:10.552693 | orchestrator | 2026-01-10 14:25:10.552713 | orchestrator | 2026-01-10 14:25:10.552724 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:25:10.552734 | orchestrator | Saturday 10 January 2026 14:25:10 +0000 (0:00:00.534) 0:00:08.283 ****** 2026-01-10 14:25:10.552745 | orchestrator | =============================================================================== 
2026-01-10 14:25:10.552756 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.84s 2026-01-10 14:25:10.552767 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.47s 2026-01-10 14:25:10.552777 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.14s 2026-01-10 14:25:10.552805 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2026-01-10 14:25:13.166501 | orchestrator | 2026-01-10 14:25:13 | INFO  | Task bd3be8a0-b658-4369-b524-2393ef303eca (ceph-configure-lvm-volumes) was prepared for execution. 2026-01-10 14:25:13.166620 | orchestrator | 2026-01-10 14:25:13 | INFO  | It takes a moment until task bd3be8a0-b658-4369-b524-2393ef303eca (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-01-10 14:25:25.823392 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-10 14:25:25.823498 | orchestrator | 2.16.14 2026-01-10 14:25:25.823512 | orchestrator | 2026-01-10 14:25:25.823522 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-10 14:25:25.823533 | orchestrator | 2026-01-10 14:25:25.823543 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-10 14:25:25.823552 | orchestrator | Saturday 10 January 2026 14:25:18 +0000 (0:00:00.336) 0:00:00.336 ****** 2026-01-10 14:25:25.823561 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-10 14:25:25.823569 | orchestrator | 2026-01-10 14:25:25.823578 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-10 14:25:25.823586 | orchestrator | Saturday 10 January 2026 14:25:18 +0000 (0:00:00.246) 0:00:00.583 ****** 2026-01-10 14:25:25.823594 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:25:25.823602 | orchestrator | 
2026-01-10 14:25:25.823609 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:25.823618 | orchestrator | Saturday 10 January 2026 14:25:18 +0000 (0:00:00.242) 0:00:00.826 ****** 2026-01-10 14:25:25.823627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-01-10 14:25:25.823635 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-01-10 14:25:25.823643 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-01-10 14:25:25.823651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-01-10 14:25:25.823659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-01-10 14:25:25.823666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-01-10 14:25:25.823674 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-01-10 14:25:25.823682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-01-10 14:25:25.823690 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-01-10 14:25:25.823717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-01-10 14:25:25.823734 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-01-10 14:25:25.823742 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-01-10 14:25:25.823750 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-01-10 14:25:25.823768 | orchestrator | 2026-01-10 14:25:25.823821 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2026-01-10 14:25:25.823852 | orchestrator | Saturday 10 January 2026 14:25:19 +0000 (0:00:00.529) 0:00:01.355 ****** 2026-01-10 14:25:25.823861 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:25:25.823870 | orchestrator | 2026-01-10 14:25:25.823878 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:25.823886 | orchestrator | Saturday 10 January 2026 14:25:19 +0000 (0:00:00.204) 0:00:01.560 ****** 2026-01-10 14:25:25.823893 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:25:25.823901 | orchestrator | 2026-01-10 14:25:25.823909 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:25.823917 | orchestrator | Saturday 10 January 2026 14:25:19 +0000 (0:00:00.221) 0:00:01.782 ****** 2026-01-10 14:25:25.823925 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:25:25.823933 | orchestrator | 2026-01-10 14:25:25.823941 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:25.823953 | orchestrator | Saturday 10 January 2026 14:25:19 +0000 (0:00:00.192) 0:00:01.974 ****** 2026-01-10 14:25:25.823962 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:25:25.823970 | orchestrator | 2026-01-10 14:25:25.823978 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:25.823986 | orchestrator | Saturday 10 January 2026 14:25:19 +0000 (0:00:00.213) 0:00:02.187 ****** 2026-01-10 14:25:25.823994 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:25:25.824002 | orchestrator | 2026-01-10 14:25:25.824009 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:25.824017 | orchestrator | Saturday 10 January 2026 14:25:20 +0000 (0:00:00.210) 0:00:02.398 ****** 2026-01-10 14:25:25.824024 | orchestrator | skipping: 
[testbed-node-3] 2026-01-10 14:25:25.824032 | orchestrator | 2026-01-10 14:25:25.824040 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:25.824048 | orchestrator | Saturday 10 January 2026 14:25:20 +0000 (0:00:00.206) 0:00:02.605 ****** 2026-01-10 14:25:25.824056 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:25:25.824064 | orchestrator | 2026-01-10 14:25:25.824072 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:25.824079 | orchestrator | Saturday 10 January 2026 14:25:20 +0000 (0:00:00.202) 0:00:02.807 ****** 2026-01-10 14:25:25.824087 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:25:25.824094 | orchestrator | 2026-01-10 14:25:25.824102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:25.824109 | orchestrator | Saturday 10 January 2026 14:25:20 +0000 (0:00:00.225) 0:00:03.032 ****** 2026-01-10 14:25:25.824117 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431) 2026-01-10 14:25:25.824126 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431) 2026-01-10 14:25:25.824134 | orchestrator | 2026-01-10 14:25:25.824142 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:25.824163 | orchestrator | Saturday 10 January 2026 14:25:21 +0000 (0:00:00.423) 0:00:03.456 ****** 2026-01-10 14:25:25.824171 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_70c6fd94-218f-483a-b965-10c70b1b97fc) 2026-01-10 14:25:25.824179 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_70c6fd94-218f-483a-b965-10c70b1b97fc) 2026-01-10 14:25:25.824186 | orchestrator | 2026-01-10 14:25:25.824194 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2026-01-10 14:25:25.824201 | orchestrator | Saturday 10 January 2026 14:25:21 +0000 (0:00:00.693) 0:00:04.150 ****** 2026-01-10 14:25:25.824208 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f7705bd4-29b3-411e-b8b9-50568fcffd73) 2026-01-10 14:25:25.824216 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f7705bd4-29b3-411e-b8b9-50568fcffd73) 2026-01-10 14:25:25.824224 | orchestrator | 2026-01-10 14:25:25.824231 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:25.824251 | orchestrator | Saturday 10 January 2026 14:25:22 +0000 (0:00:00.663) 0:00:04.813 ****** 2026-01-10 14:25:25.824259 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2130b2ec-580e-4b39-88b4-748d7926916f) 2026-01-10 14:25:25.824266 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2130b2ec-580e-4b39-88b4-748d7926916f) 2026-01-10 14:25:25.824273 | orchestrator | 2026-01-10 14:25:25.824280 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:25.824288 | orchestrator | Saturday 10 January 2026 14:25:23 +0000 (0:00:00.990) 0:00:05.803 ****** 2026-01-10 14:25:25.824295 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-10 14:25:25.824303 | orchestrator | 2026-01-10 14:25:25.824315 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:25.824323 | orchestrator | Saturday 10 January 2026 14:25:23 +0000 (0:00:00.350) 0:00:06.154 ****** 2026-01-10 14:25:25.824330 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-01-10 14:25:25.824337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-01-10 14:25:25.824345 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-01-10 14:25:25.824352 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-01-10 14:25:25.824360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-01-10 14:25:25.824367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-01-10 14:25:25.824374 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-01-10 14:25:25.824381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-01-10 14:25:25.824389 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-01-10 14:25:25.824397 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-01-10 14:25:25.824404 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-01-10 14:25:25.824411 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-01-10 14:25:25.824419 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-01-10 14:25:25.824426 | orchestrator | 2026-01-10 14:25:25.824434 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:25.824441 | orchestrator | Saturday 10 January 2026 14:25:24 +0000 (0:00:00.405) 0:00:06.560 ****** 2026-01-10 14:25:25.824448 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:25:25.824456 | orchestrator | 2026-01-10 14:25:25.824463 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:25.824470 | orchestrator | Saturday 10 January 2026 14:25:24 +0000 
(0:00:00.217) 0:00:06.777 ****** 2026-01-10 14:25:25.824477 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:25:25.824485 | orchestrator | 2026-01-10 14:25:25.824492 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:25.824500 | orchestrator | Saturday 10 January 2026 14:25:24 +0000 (0:00:00.212) 0:00:06.989 ****** 2026-01-10 14:25:25.824507 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:25:25.824515 | orchestrator | 2026-01-10 14:25:25.824522 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:25.824530 | orchestrator | Saturday 10 January 2026 14:25:24 +0000 (0:00:00.233) 0:00:07.223 ****** 2026-01-10 14:25:25.824537 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:25:25.824544 | orchestrator | 2026-01-10 14:25:25.824551 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:25.824558 | orchestrator | Saturday 10 January 2026 14:25:25 +0000 (0:00:00.217) 0:00:07.441 ****** 2026-01-10 14:25:25.824571 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:25:25.824579 | orchestrator | 2026-01-10 14:25:25.824586 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:25.824594 | orchestrator | Saturday 10 January 2026 14:25:25 +0000 (0:00:00.211) 0:00:07.652 ****** 2026-01-10 14:25:25.824601 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:25:25.824609 | orchestrator | 2026-01-10 14:25:25.824616 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:25.824623 | orchestrator | Saturday 10 January 2026 14:25:25 +0000 (0:00:00.249) 0:00:07.902 ****** 2026-01-10 14:25:25.824631 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:25:25.824638 | orchestrator | 2026-01-10 14:25:25.824649 | orchestrator | TASK [Add known 
partitions to the list of available block devices] *************
2026-01-10 14:25:34.227087 | orchestrator | Saturday 10 January 2026  14:25:25 +0000 (0:00:00.210) 0:00:08.112 ******
2026-01-10 14:25:34.227201 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:25:34.227217 | orchestrator |
2026-01-10 14:25:34.227228 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:34.227239 | orchestrator | Saturday 10 January 2026  14:25:26 +0000 (0:00:00.210) 0:00:08.323 ******
2026-01-10 14:25:34.227248 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-10 14:25:34.227258 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-10 14:25:34.227268 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-10 14:25:34.227278 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-10 14:25:34.227288 | orchestrator |
2026-01-10 14:25:34.227298 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:34.227307 | orchestrator | Saturday 10 January 2026  14:25:27 +0000 (0:00:01.140) 0:00:09.464 ******
2026-01-10 14:25:34.227317 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:25:34.227326 | orchestrator |
2026-01-10 14:25:34.227336 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:34.227346 | orchestrator | Saturday 10 January 2026  14:25:27 +0000 (0:00:00.234) 0:00:09.698 ******
2026-01-10 14:25:34.227355 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:25:34.227365 | orchestrator |
2026-01-10 14:25:34.227375 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:34.227391 | orchestrator | Saturday 10 January 2026  14:25:27 +0000 (0:00:00.215) 0:00:09.914 ******
2026-01-10 14:25:34.227405 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:25:34.227420 | orchestrator |
2026-01-10 14:25:34.227436 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:34.227454 | orchestrator | Saturday 10 January 2026  14:25:27 +0000 (0:00:00.218) 0:00:10.132 ******
2026-01-10 14:25:34.227472 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:25:34.227489 | orchestrator |
2026-01-10 14:25:34.227506 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-10 14:25:34.227522 | orchestrator | Saturday 10 January 2026  14:25:28 +0000 (0:00:00.226) 0:00:10.358 ******
2026-01-10 14:25:34.227540 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-01-10 14:25:34.227559 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-01-10 14:25:34.227577 | orchestrator |
2026-01-10 14:25:34.227616 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-10 14:25:34.227631 | orchestrator | Saturday 10 January 2026  14:25:28 +0000 (0:00:00.194) 0:00:10.553 ******
2026-01-10 14:25:34.227647 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:25:34.227660 | orchestrator |
2026-01-10 14:25:34.227670 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-10 14:25:34.227680 | orchestrator | Saturday 10 January 2026  14:25:28 +0000 (0:00:00.128) 0:00:10.681 ******
2026-01-10 14:25:34.227690 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:25:34.227699 | orchestrator |
2026-01-10 14:25:34.227709 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-10 14:25:34.227742 | orchestrator | Saturday 10 January 2026  14:25:28 +0000 (0:00:00.149) 0:00:10.830 ******
2026-01-10 14:25:34.227752 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:25:34.227762 | orchestrator |
2026-01-10 14:25:34.227771 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-10 14:25:34.227821 | orchestrator | Saturday 10 January 2026  14:25:28 +0000 (0:00:00.158) 0:00:10.988 ******
2026-01-10 14:25:34.227832 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:25:34.227841 | orchestrator |
2026-01-10 14:25:34.227851 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-10 14:25:34.227860 | orchestrator | Saturday 10 January 2026  14:25:28 +0000 (0:00:00.167) 0:00:11.156 ******
2026-01-10 14:25:34.227871 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6bac10f4-8703-5b93-90a3-91ba865f27b3'}})
2026-01-10 14:25:34.227881 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ef830303-d908-5775-964e-bef8687288a6'}})
2026-01-10 14:25:34.227890 | orchestrator |
2026-01-10 14:25:34.227900 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-10 14:25:34.227909 | orchestrator | Saturday 10 January 2026  14:25:29 +0000 (0:00:00.224) 0:00:11.380 ******
2026-01-10 14:25:34.227920 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6bac10f4-8703-5b93-90a3-91ba865f27b3'}})
2026-01-10 14:25:34.227937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ef830303-d908-5775-964e-bef8687288a6'}})
2026-01-10 14:25:34.227946 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:25:34.227956 | orchestrator |
2026-01-10 14:25:34.227965 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-10 14:25:34.227975 | orchestrator | Saturday 10 January 2026  14:25:29 +0000 (0:00:00.167) 0:00:11.547 ******
2026-01-10 14:25:34.227984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6bac10f4-8703-5b93-90a3-91ba865f27b3'}})
2026-01-10 14:25:34.227994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ef830303-d908-5775-964e-bef8687288a6'}})
2026-01-10 14:25:34.228003 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:25:34.228013 | orchestrator |
2026-01-10 14:25:34.228022 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-10 14:25:34.228031 | orchestrator | Saturday 10 January 2026  14:25:29 +0000 (0:00:00.378) 0:00:11.926 ******
2026-01-10 14:25:34.228041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6bac10f4-8703-5b93-90a3-91ba865f27b3'}})
2026-01-10 14:25:34.228069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ef830303-d908-5775-964e-bef8687288a6'}})
2026-01-10 14:25:34.228078 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:25:34.228088 | orchestrator |
2026-01-10 14:25:34.228097 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-10 14:25:34.228125 | orchestrator | Saturday 10 January 2026  14:25:29 +0000 (0:00:00.171) 0:00:12.097 ******
2026-01-10 14:25:34.228135 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:25:34.228144 | orchestrator |
2026-01-10 14:25:34.228154 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-10 14:25:34.228163 | orchestrator | Saturday 10 January 2026  14:25:29 +0000 (0:00:00.151) 0:00:12.265 ******
2026-01-10 14:25:34.228173 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:25:34.228182 | orchestrator |
2026-01-10 14:25:34.228192 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-10 14:25:34.228201 | orchestrator | Saturday 10 January 2026  14:25:30 +0000 (0:00:00.151) 0:00:12.417 ******
2026-01-10 14:25:34.228210 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:25:34.228220 | orchestrator |
2026-01-10 14:25:34.228229 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-10 14:25:34.228239 | orchestrator | Saturday 10 January 2026  14:25:30 +0000 (0:00:00.157) 0:00:12.574 ******
2026-01-10 14:25:34.228256 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:25:34.228266 | orchestrator |
2026-01-10 14:25:34.228275 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-10 14:25:34.228285 | orchestrator | Saturday 10 January 2026  14:25:30 +0000 (0:00:00.136) 0:00:12.710 ******
2026-01-10 14:25:34.228295 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:25:34.228304 | orchestrator |
2026-01-10 14:25:34.228314 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-10 14:25:34.228323 | orchestrator | Saturday 10 January 2026  14:25:30 +0000 (0:00:00.156) 0:00:12.867 ******
2026-01-10 14:25:34.228332 | orchestrator | ok: [testbed-node-3] => {
2026-01-10 14:25:34.228342 | orchestrator |     "ceph_osd_devices": {
2026-01-10 14:25:34.228351 | orchestrator |         "sdb": {
2026-01-10 14:25:34.228361 | orchestrator |             "osd_lvm_uuid": "6bac10f4-8703-5b93-90a3-91ba865f27b3"
2026-01-10 14:25:34.228371 | orchestrator |         },
2026-01-10 14:25:34.228380 | orchestrator |         "sdc": {
2026-01-10 14:25:34.228389 | orchestrator |             "osd_lvm_uuid": "ef830303-d908-5775-964e-bef8687288a6"
2026-01-10 14:25:34.228399 | orchestrator |         }
2026-01-10 14:25:34.228408 | orchestrator |     }
2026-01-10 14:25:34.228418 | orchestrator | }
2026-01-10 14:25:34.228427 | orchestrator |
2026-01-10 14:25:34.228437 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-10 14:25:34.228446 | orchestrator | Saturday 10 January 2026  14:25:30 +0000 (0:00:00.141) 0:00:13.008 ******
2026-01-10 14:25:34.228456 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:25:34.228465 | orchestrator |
2026-01-10 14:25:34.228475 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-10 14:25:34.228484 | orchestrator | Saturday 10 January 2026  14:25:30 +0000 (0:00:00.141) 0:00:13.150 ******
2026-01-10 14:25:34.228494 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:25:34.228503 | orchestrator |
2026-01-10 14:25:34.228512 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-10 14:25:34.228522 | orchestrator | Saturday 10 January 2026  14:25:31 +0000 (0:00:00.149) 0:00:13.299 ******
2026-01-10 14:25:34.228531 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:25:34.228541 | orchestrator |
2026-01-10 14:25:34.228551 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-10 14:25:34.228560 | orchestrator | Saturday 10 January 2026  14:25:31 +0000 (0:00:00.129) 0:00:13.429 ******
2026-01-10 14:25:34.228569 | orchestrator | changed: [testbed-node-3] => {
2026-01-10 14:25:34.228579 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-10 14:25:34.228588 | orchestrator |         "ceph_osd_devices": {
2026-01-10 14:25:34.228598 | orchestrator |             "sdb": {
2026-01-10 14:25:34.228607 | orchestrator |                 "osd_lvm_uuid": "6bac10f4-8703-5b93-90a3-91ba865f27b3"
2026-01-10 14:25:34.228616 | orchestrator |             },
2026-01-10 14:25:34.228626 | orchestrator |             "sdc": {
2026-01-10 14:25:34.228635 | orchestrator |                 "osd_lvm_uuid": "ef830303-d908-5775-964e-bef8687288a6"
2026-01-10 14:25:34.228645 | orchestrator |             }
2026-01-10 14:25:34.228654 | orchestrator |         },
2026-01-10 14:25:34.228663 | orchestrator |         "lvm_volumes": [
2026-01-10 14:25:34.228673 | orchestrator |             {
2026-01-10 14:25:34.228682 | orchestrator |                 "data": "osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3",
2026-01-10 14:25:34.228692 | orchestrator |                 "data_vg": "ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3"
2026-01-10 14:25:34.228701 | orchestrator |             },
2026-01-10 14:25:34.228711 | orchestrator |             {
2026-01-10 14:25:34.228720 | orchestrator |                 "data": "osd-block-ef830303-d908-5775-964e-bef8687288a6",
2026-01-10 14:25:34.228729 | orchestrator |                 "data_vg": "ceph-ef830303-d908-5775-964e-bef8687288a6"
2026-01-10 14:25:34.228744 | orchestrator |             }
2026-01-10 14:25:34.228754 | orchestrator |         ]
2026-01-10 14:25:34.228763 | orchestrator |     }
2026-01-10 14:25:34.228827 | orchestrator | }
2026-01-10 14:25:34.228839 | orchestrator |
2026-01-10 14:25:34.228849 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-10 14:25:34.228859 | orchestrator | Saturday 10 January 2026  14:25:31 +0000 (0:00:00.423) 0:00:13.853 ******
2026-01-10 14:25:34.228868 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 14:25:34.228878 | orchestrator |
2026-01-10 14:25:34.228887 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-10 14:25:34.228897 | orchestrator |
2026-01-10 14:25:34.228906 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-10 14:25:34.228916 | orchestrator | Saturday 10 January 2026  14:25:33 +0000 (0:00:02.082) 0:00:15.936 ******
2026-01-10 14:25:34.228925 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-10 14:25:34.228935 | orchestrator |
2026-01-10 14:25:34.228944 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-10 14:25:34.228954 | orchestrator | Saturday 10 January 2026  14:25:33 +0000 (0:00:00.285) 0:00:16.229 ******
2026-01-10 14:25:34.228963 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:25:34.228973 | orchestrator |
2026-01-10 14:25:34.228989 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:42.529438 | orchestrator | Saturday 10 January 2026  14:25:34 +0000 (0:00:00.285) 0:00:16.514 ******
2026-01-10 14:25:42.529565 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-01-10 14:25:42.529587 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-01-10 14:25:42.529618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-01-10 14:25:42.530432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-01-10 14:25:42.530458 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-01-10 14:25:42.530468 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-01-10 14:25:42.530477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-01-10 14:25:42.530486 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-01-10 14:25:42.530495 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-01-10 14:25:42.530503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-01-10 14:25:42.530512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-01-10 14:25:42.530526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-01-10 14:25:42.530535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-01-10 14:25:42.530544 | orchestrator |
2026-01-10 14:25:42.530554 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:42.530563 | orchestrator | Saturday 10 January 2026  14:25:34 +0000 (0:00:00.455) 0:00:16.970 ******
2026-01-10 14:25:42.530572 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:42.530581 | orchestrator |
2026-01-10 14:25:42.530590 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:42.530599 | orchestrator | Saturday 10 January 2026  14:25:34 +0000 (0:00:00.237) 0:00:17.208 ******
2026-01-10 14:25:42.530607 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:42.530616 | orchestrator |
2026-01-10 14:25:42.530625 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:42.530633 | orchestrator | Saturday 10 January 2026  14:25:35 +0000 (0:00:00.215) 0:00:17.423 ******
2026-01-10 14:25:42.530642 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:42.530651 | orchestrator |
2026-01-10 14:25:42.530660 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:42.530694 | orchestrator | Saturday 10 January 2026  14:25:35 +0000 (0:00:00.192) 0:00:17.616 ******
2026-01-10 14:25:42.530703 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:42.530712 | orchestrator |
2026-01-10 14:25:42.530720 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:42.530729 | orchestrator | Saturday 10 January 2026  14:25:35 +0000 (0:00:00.207) 0:00:17.823 ******
2026-01-10 14:25:42.530738 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:42.530746 | orchestrator |
2026-01-10 14:25:42.530755 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:42.530763 | orchestrator | Saturday 10 January 2026  14:25:36 +0000 (0:00:00.643) 0:00:18.467 ******
2026-01-10 14:25:42.530794 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:42.530810 | orchestrator |
2026-01-10 14:25:42.530844 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:42.530859 | orchestrator | Saturday 10 January 2026  14:25:36 +0000 (0:00:00.200) 0:00:18.667 ******
2026-01-10 14:25:42.530875 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:42.530891 | orchestrator |
2026-01-10 14:25:42.530908 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:42.530923 | orchestrator | Saturday 10 January 2026  14:25:36 +0000 (0:00:00.222) 0:00:18.889 ******
2026-01-10 14:25:42.530939 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:42.530953 | orchestrator |
2026-01-10 14:25:42.530968 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:42.530979 | orchestrator | Saturday 10 January 2026  14:25:36 +0000 (0:00:00.203) 0:00:19.092 ******
2026-01-10 14:25:42.530988 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78)
2026-01-10 14:25:42.530998 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78)
2026-01-10 14:25:42.531006 | orchestrator |
2026-01-10 14:25:42.531015 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:42.531023 | orchestrator | Saturday 10 January 2026  14:25:37 +0000 (0:00:00.402) 0:00:19.495 ******
2026-01-10 14:25:42.531032 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_763a4a26-d97a-40e2-a569-d464b2971007)
2026-01-10 14:25:42.531040 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_763a4a26-d97a-40e2-a569-d464b2971007)
2026-01-10 14:25:42.531048 | orchestrator |
2026-01-10 14:25:42.531063 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:42.531077 | orchestrator | Saturday 10 January 2026  14:25:37 +0000 (0:00:00.406) 0:00:19.901 ******
2026-01-10 14:25:42.531090 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_45b03c06-0ab6-4b62-8b16-77c772305c6a)
2026-01-10 14:25:42.531104 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_45b03c06-0ab6-4b62-8b16-77c772305c6a)
2026-01-10 14:25:42.531117 | orchestrator |
2026-01-10 14:25:42.531126 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:42.531155 | orchestrator | Saturday 10 January 2026  14:25:38 +0000 (0:00:00.453) 0:00:20.355 ******
2026-01-10 14:25:42.531164 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e6c5241f-60aa-42cf-822c-98275b24deb1)
2026-01-10 14:25:42.531173 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e6c5241f-60aa-42cf-822c-98275b24deb1)
2026-01-10 14:25:42.531181 | orchestrator |
2026-01-10 14:25:42.531190 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:42.531198 | orchestrator | Saturday 10 January 2026  14:25:38 +0000 (0:00:00.417) 0:00:20.772 ******
2026-01-10 14:25:42.531207 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-10 14:25:42.531216 | orchestrator |
2026-01-10 14:25:42.531224 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:42.531232 | orchestrator | Saturday 10 January 2026  14:25:38 +0000 (0:00:00.333) 0:00:21.106 ******
2026-01-10 14:25:42.531252 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-01-10 14:25:42.531261 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-01-10 14:25:42.531270 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-01-10 14:25:42.531278 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-01-10 14:25:42.531286 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-01-10 14:25:42.531295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-01-10 14:25:42.531303 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-01-10 14:25:42.531311 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-01-10 14:25:42.531320 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-01-10 14:25:42.531328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-01-10 14:25:42.531337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-01-10 14:25:42.531345 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-01-10 14:25:42.531355 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-01-10 14:25:42.531370 | orchestrator |
2026-01-10 14:25:42.531384 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:42.531398 | orchestrator | Saturday 10 January 2026  14:25:39 +0000 (0:00:00.392) 0:00:21.498 ******
2026-01-10 14:25:42.531412 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:42.531426 | orchestrator |
2026-01-10 14:25:42.531440 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:42.531464 | orchestrator | Saturday 10 January 2026  14:25:39 +0000 (0:00:00.690) 0:00:22.188 ******
2026-01-10 14:25:42.531481 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:42.531498 | orchestrator |
2026-01-10 14:25:42.531514 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:42.531529 | orchestrator | Saturday 10 January 2026  14:25:40 +0000 (0:00:00.189) 0:00:22.378 ******
2026-01-10 14:25:42.531543 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:42.531558 | orchestrator |
2026-01-10 14:25:42.531567 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:42.531576 | orchestrator | Saturday 10 January 2026  14:25:40 +0000 (0:00:00.221) 0:00:22.600 ******
2026-01-10 14:25:42.531584 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:42.531593 | orchestrator |
2026-01-10 14:25:42.531601 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:42.531610 | orchestrator | Saturday 10 January 2026  14:25:40 +0000 (0:00:00.209) 0:00:22.809 ******
2026-01-10 14:25:42.531618 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:42.531626 | orchestrator |
2026-01-10 14:25:42.531635 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:42.531643 | orchestrator | Saturday 10 January 2026  14:25:40 +0000 (0:00:00.199) 0:00:23.008 ******
2026-01-10 14:25:42.531651 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:42.531660 | orchestrator |
2026-01-10 14:25:42.531668 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:42.531677 | orchestrator | Saturday 10 January 2026  14:25:40 +0000 (0:00:00.203) 0:00:23.212 ******
2026-01-10 14:25:42.531685 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:42.531693 | orchestrator |
2026-01-10 14:25:42.531702 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:42.531710 | orchestrator | Saturday 10 January 2026  14:25:41 +0000 (0:00:00.212) 0:00:23.424 ******
2026-01-10 14:25:42.531726 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:42.531735 | orchestrator |
2026-01-10 14:25:42.531743 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:42.531752 | orchestrator | Saturday 10 January 2026  14:25:41 +0000 (0:00:00.217) 0:00:23.641 ******
2026-01-10 14:25:42.531760 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-01-10 14:25:42.531769 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-01-10 14:25:42.531811 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-01-10 14:25:42.531820 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-01-10 14:25:42.531829 | orchestrator |
2026-01-10 14:25:42.531837 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:42.531846 | orchestrator | Saturday 10 January 2026  14:25:42 +0000 (0:00:00.980) 0:00:24.622 ******
2026-01-10 14:25:42.531855 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:48.773171 | orchestrator |
2026-01-10 14:25:48.773270 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:48.773281 | orchestrator | Saturday 10 January 2026  14:25:42 +0000 (0:00:00.200) 0:00:24.823 ******
2026-01-10 14:25:48.773289 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:48.773296 | orchestrator |
2026-01-10 14:25:48.773303 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:48.773309 | orchestrator | Saturday 10 January 2026  14:25:42 +0000 (0:00:00.187) 0:00:25.010 ******
2026-01-10 14:25:48.773316 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:48.773322 | orchestrator |
2026-01-10 14:25:48.773329 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:48.773335 | orchestrator | Saturday 10 January 2026  14:25:42 +0000 (0:00:00.190) 0:00:25.200 ******
2026-01-10 14:25:48.773342 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:48.773348 | orchestrator |
2026-01-10 14:25:48.773354 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-10 14:25:48.773360 | orchestrator | Saturday 10 January 2026  14:25:43 +0000 (0:00:00.718) 0:00:25.919 ******
2026-01-10 14:25:48.773367 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-01-10 14:25:48.773373 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-01-10 14:25:48.773379 | orchestrator |
2026-01-10 14:25:48.773386 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-10 14:25:48.773392 | orchestrator | Saturday 10 January 2026  14:25:43 +0000 (0:00:00.188) 0:00:26.107 ******
2026-01-10 14:25:48.773398 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:48.773404 | orchestrator |
2026-01-10 14:25:48.773411 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-10 14:25:48.773417 | orchestrator | Saturday 10 January 2026  14:25:43 +0000 (0:00:00.129) 0:00:26.237 ******
2026-01-10 14:25:48.773423 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:48.773429 | orchestrator |
2026-01-10 14:25:48.773435 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-10 14:25:48.773441 | orchestrator | Saturday 10 January 2026  14:25:44 +0000 (0:00:00.138) 0:00:26.375 ******
2026-01-10 14:25:48.773447 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:48.773453 | orchestrator |
2026-01-10 14:25:48.773459 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-10 14:25:48.773466 | orchestrator | Saturday 10 January 2026  14:25:44 +0000 (0:00:00.134) 0:00:26.510 ******
2026-01-10 14:25:48.773472 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:25:48.773479 | orchestrator |
2026-01-10 14:25:48.773485 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-10 14:25:48.773491 | orchestrator | Saturday 10 January 2026  14:25:44 +0000 (0:00:00.138) 0:00:26.648 ******
2026-01-10 14:25:48.773500 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0fad3856-f6d1-50e2-a5cb-d9f4a0859299'}})
2026-01-10 14:25:48.773506 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '39355231-3192-5ff7-9e27-947e8968f1e9'}})
2026-01-10 14:25:48.773534 | orchestrator |
2026-01-10 14:25:48.773541 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-10 14:25:48.773547 | orchestrator | Saturday 10 January 2026  14:25:44 +0000 (0:00:00.183) 0:00:26.832 ******
2026-01-10 14:25:48.773554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0fad3856-f6d1-50e2-a5cb-d9f4a0859299'}})
2026-01-10 14:25:48.773577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '39355231-3192-5ff7-9e27-947e8968f1e9'}})
2026-01-10 14:25:48.773583 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:48.773589 | orchestrator |
2026-01-10 14:25:48.773595 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-10 14:25:48.773601 | orchestrator | Saturday 10 January 2026  14:25:44 +0000 (0:00:00.197) 0:00:27.030 ******
2026-01-10 14:25:48.773608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0fad3856-f6d1-50e2-a5cb-d9f4a0859299'}})
2026-01-10 14:25:48.773614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '39355231-3192-5ff7-9e27-947e8968f1e9'}})
2026-01-10 14:25:48.773620 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:48.773626 | orchestrator |
2026-01-10 14:25:48.773632 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-10 14:25:48.773638 | orchestrator | Saturday 10 January 2026  14:25:44 +0000 (0:00:00.180) 0:00:27.210 ******
2026-01-10 14:25:48.773645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0fad3856-f6d1-50e2-a5cb-d9f4a0859299'}})
2026-01-10 14:25:48.773651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '39355231-3192-5ff7-9e27-947e8968f1e9'}})
2026-01-10 14:25:48.773657 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:48.773663 | orchestrator |
2026-01-10 14:25:48.773669 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-10 14:25:48.773676 | orchestrator | Saturday 10 January 2026  14:25:45 +0000 (0:00:00.156) 0:00:27.367 ******
2026-01-10 14:25:48.773682 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:25:48.773688 | orchestrator |
2026-01-10 14:25:48.773694 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-10 14:25:48.773700 | orchestrator | Saturday 10 January 2026  14:25:45 +0000 (0:00:00.139) 0:00:27.506 ******
2026-01-10 14:25:48.773707 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:25:48.773713 | orchestrator |
2026-01-10 14:25:48.773719 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-10 14:25:48.773725 | orchestrator | Saturday 10 January 2026  14:25:45 +0000 (0:00:00.138) 0:00:27.645 ******
2026-01-10 14:25:48.773746 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:48.773753 | orchestrator |
2026-01-10 14:25:48.773759 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-10 14:25:48.773765 | orchestrator | Saturday 10 January 2026  14:25:45 +0000 (0:00:00.361) 0:00:28.007 ******
2026-01-10 14:25:48.773785 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:48.773792 | orchestrator |
2026-01-10 14:25:48.773798 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-10 14:25:48.773808 | orchestrator | Saturday 10 January 2026  14:25:45 +0000 (0:00:00.135) 0:00:28.142 ******
2026-01-10 14:25:48.773815 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:48.773819 | orchestrator |
2026-01-10 14:25:48.773824 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-10 14:25:48.773828 | orchestrator | Saturday 10 January 2026  14:25:45 +0000 (0:00:00.126) 0:00:28.268 ******
2026-01-10 14:25:48.773832 | orchestrator | ok: [testbed-node-4] => {
2026-01-10 14:25:48.773838 | orchestrator |     "ceph_osd_devices": {
2026-01-10 14:25:48.773845 | orchestrator |         "sdb": {
2026-01-10 14:25:48.773851 | orchestrator |             "osd_lvm_uuid": "0fad3856-f6d1-50e2-a5cb-d9f4a0859299"
2026-01-10 14:25:48.773863 | orchestrator |         },
2026-01-10 14:25:48.773870 | orchestrator |         "sdc": {
2026-01-10 14:25:48.773877 | orchestrator |             "osd_lvm_uuid": "39355231-3192-5ff7-9e27-947e8968f1e9"
2026-01-10 14:25:48.773883 | orchestrator |         }
2026-01-10 14:25:48.773890 | orchestrator |     }
2026-01-10 14:25:48.773897 | orchestrator | }
2026-01-10 14:25:48.773904 | orchestrator |
2026-01-10 14:25:48.773911 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-10 14:25:48.773917 | orchestrator | Saturday 10 January 2026  14:25:46 +0000 (0:00:00.122) 0:00:28.391 ******
2026-01-10 14:25:48.773923 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:48.773929 | orchestrator |
2026-01-10 14:25:48.773936 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-10 14:25:48.773943 | orchestrator | Saturday 10 January 2026  14:25:46 +0000 (0:00:00.115) 0:00:28.506 ******
2026-01-10 14:25:48.773949 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:48.773955 | orchestrator |
2026-01-10 14:25:48.773962 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-10 14:25:48.773968 | orchestrator | Saturday 10 January 2026  14:25:46 +0000 (0:00:00.109) 0:00:28.616 ******
2026-01-10 14:25:48.773975 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:25:48.773981 | orchestrator |
2026-01-10 14:25:48.773987 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-10 14:25:48.773994 | orchestrator | Saturday 10 January 2026  14:25:46 +0000 (0:00:00.131) 0:00:28.748 ******
2026-01-10 14:25:48.774000 | orchestrator | changed: [testbed-node-4] => {
2026-01-10 14:25:48.774006 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-10 14:25:48.774050 | orchestrator |         "ceph_osd_devices": {
2026-01-10 14:25:48.774058 | orchestrator |             "sdb": {
2026-01-10 14:25:48.774065 | orchestrator |                 "osd_lvm_uuid": "0fad3856-f6d1-50e2-a5cb-d9f4a0859299"
2026-01-10 14:25:48.774072 | orchestrator |             },
2026-01-10 14:25:48.774079 | orchestrator |             "sdc": {
2026-01-10 14:25:48.774086 | orchestrator |                 "osd_lvm_uuid": "39355231-3192-5ff7-9e27-947e8968f1e9"
2026-01-10 14:25:48.774093 | orchestrator |             }
2026-01-10 14:25:48.774100 | orchestrator |         },
2026-01-10 14:25:48.774107 | orchestrator |         "lvm_volumes": [
2026-01-10 14:25:48.774113 | orchestrator |             {
2026-01-10 14:25:48.774119 | orchestrator |                 "data": "osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299",
2026-01-10 14:25:48.774147 | orchestrator |                 "data_vg": "ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299"
2026-01-10 14:25:48.774155 | orchestrator |             },
2026-01-10 14:25:48.774162 | orchestrator |             {
2026-01-10 14:25:48.774168 | orchestrator |                 "data": "osd-block-39355231-3192-5ff7-9e27-947e8968f1e9",
2026-01-10 14:25:48.774175 | orchestrator |                 "data_vg": "ceph-39355231-3192-5ff7-9e27-947e8968f1e9"
2026-01-10 14:25:48.774181 | orchestrator |             }
2026-01-10 14:25:48.774188 | orchestrator |         ]
2026-01-10 14:25:48.774195 | orchestrator |     }
2026-01-10 14:25:48.774202 | orchestrator | }
2026-01-10 14:25:48.774208 | orchestrator |
2026-01-10 14:25:48.774214 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-10 14:25:48.774220 | orchestrator | Saturday 10 January 2026  14:25:46 +0000 (0:00:00.189) 0:00:28.937 ******
2026-01-10 14:25:48.774227 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-10 14:25:48.774233 | orchestrator |
2026-01-10 14:25:48.774239 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-10 14:25:48.774245 | orchestrator |
2026-01-10 14:25:48.774251 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-10 14:25:48.774257 | orchestrator | Saturday 10 January 2026  14:25:47 +0000 (0:00:00.975) 0:00:29.912 ******
2026-01-10 14:25:48.774264 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-10 14:25:48.774270 | orchestrator |
2026-01-10 14:25:48.774276 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-10 14:25:48.774293 | orchestrator | Saturday 10 January 2026  14:25:48 +0000 (0:00:00.582) 0:00:30.494 ******
2026-01-10 14:25:48.774300 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:25:48.774307 | orchestrator |
2026-01-10 14:25:48.774314 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:48.774320 | orchestrator | Saturday 10 January 2026  14:25:48 +0000 (0:00:00.228) 0:00:30.723 ******
2026-01-10 14:25:48.774326 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-10 14:25:48.774333 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-10 14:25:48.774340 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-10 14:25:48.774346 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-10 14:25:48.774353 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-10 14:25:48.774367 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-10 14:25:56.310120 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-10 14:25:56.310232 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-10 14:25:56.310249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-10 14:25:56.310260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-10 14:25:56.310272 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-10 14:25:56.310283 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-10 14:25:56.310294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-10 14:25:56.310305 | orchestrator |
2026-01-10 14:25:56.310317 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:56.310329 | orchestrator | Saturday 10 January 2026  14:25:48 +0000 (0:00:00.332) 0:00:31.055 ******
2026-01-10 14:25:56.310340 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:56.310352 | orchestrator |
2026-01-10 14:25:56.310363 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:25:56.310374 | orchestrator | Saturday 10 January 2026 14:25:48 +0000
(0:00:00.196) 0:00:31.252 ****** 2026-01-10 14:25:56.310385 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:56.310396 | orchestrator | 2026-01-10 14:25:56.310407 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:56.310418 | orchestrator | Saturday 10 January 2026 14:25:49 +0000 (0:00:00.201) 0:00:31.454 ****** 2026-01-10 14:25:56.310429 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:56.310440 | orchestrator | 2026-01-10 14:25:56.310451 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:56.310462 | orchestrator | Saturday 10 January 2026 14:25:49 +0000 (0:00:00.173) 0:00:31.627 ****** 2026-01-10 14:25:56.310473 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:56.310484 | orchestrator | 2026-01-10 14:25:56.310495 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:56.310506 | orchestrator | Saturday 10 January 2026 14:25:49 +0000 (0:00:00.198) 0:00:31.826 ****** 2026-01-10 14:25:56.310516 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:56.310527 | orchestrator | 2026-01-10 14:25:56.310538 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:56.310549 | orchestrator | Saturday 10 January 2026 14:25:49 +0000 (0:00:00.179) 0:00:32.005 ****** 2026-01-10 14:25:56.310560 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:56.310571 | orchestrator | 2026-01-10 14:25:56.310582 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:56.310620 | orchestrator | Saturday 10 January 2026 14:25:49 +0000 (0:00:00.202) 0:00:32.208 ****** 2026-01-10 14:25:56.310633 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:56.310645 | orchestrator | 2026-01-10 14:25:56.310658 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2026-01-10 14:25:56.310671 | orchestrator | Saturday 10 January 2026 14:25:50 +0000 (0:00:00.192) 0:00:32.400 ****** 2026-01-10 14:25:56.310681 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:56.310692 | orchestrator | 2026-01-10 14:25:56.310703 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:56.310714 | orchestrator | Saturday 10 January 2026 14:25:50 +0000 (0:00:00.196) 0:00:32.596 ****** 2026-01-10 14:25:56.310725 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c) 2026-01-10 14:25:56.310737 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c) 2026-01-10 14:25:56.310747 | orchestrator | 2026-01-10 14:25:56.310758 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:56.310769 | orchestrator | Saturday 10 January 2026 14:25:51 +0000 (0:00:00.760) 0:00:33.357 ****** 2026-01-10 14:25:56.310780 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4515c98e-1f25-421e-81d3-264e20827141) 2026-01-10 14:25:56.310791 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4515c98e-1f25-421e-81d3-264e20827141) 2026-01-10 14:25:56.310826 | orchestrator | 2026-01-10 14:25:56.310837 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:56.310848 | orchestrator | Saturday 10 January 2026 14:25:51 +0000 (0:00:00.432) 0:00:33.790 ****** 2026-01-10 14:25:56.310859 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9cc4e4a6-fdb0-4f2b-8497-7d80ad86af00) 2026-01-10 14:25:56.310870 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9cc4e4a6-fdb0-4f2b-8497-7d80ad86af00) 2026-01-10 14:25:56.310880 | orchestrator | 2026-01-10 14:25:56.310891 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:56.310902 | orchestrator | Saturday 10 January 2026 14:25:51 +0000 (0:00:00.398) 0:00:34.189 ****** 2026-01-10 14:25:56.310913 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_355a7212-75f2-41c4-a284-fbc15ac49d3c) 2026-01-10 14:25:56.310923 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_355a7212-75f2-41c4-a284-fbc15ac49d3c) 2026-01-10 14:25:56.310934 | orchestrator | 2026-01-10 14:25:56.310945 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:25:56.310955 | orchestrator | Saturday 10 January 2026 14:25:52 +0000 (0:00:00.493) 0:00:34.682 ****** 2026-01-10 14:25:56.310966 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-10 14:25:56.310977 | orchestrator | 2026-01-10 14:25:56.310988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:56.311016 | orchestrator | Saturday 10 January 2026 14:25:52 +0000 (0:00:00.318) 0:00:35.001 ****** 2026-01-10 14:25:56.311027 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-10 14:25:56.311038 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-10 14:25:56.311049 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-10 14:25:56.311060 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-10 14:25:56.311070 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-10 14:25:56.311099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-10 14:25:56.311111 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-10 14:25:56.311121 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-10 14:25:56.311142 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-10 14:25:56.311152 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-10 14:25:56.311163 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-10 14:25:56.311173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-10 14:25:56.311183 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-10 14:25:56.311194 | orchestrator | 2026-01-10 14:25:56.311205 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:56.311215 | orchestrator | Saturday 10 January 2026 14:25:53 +0000 (0:00:00.341) 0:00:35.343 ****** 2026-01-10 14:25:56.311226 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:56.311237 | orchestrator | 2026-01-10 14:25:56.311247 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:56.311258 | orchestrator | Saturday 10 January 2026 14:25:53 +0000 (0:00:00.225) 0:00:35.569 ****** 2026-01-10 14:25:56.311268 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:56.311279 | orchestrator | 2026-01-10 14:25:56.311290 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:56.311306 | orchestrator | Saturday 10 January 2026 14:25:53 +0000 (0:00:00.214) 0:00:35.783 ****** 2026-01-10 14:25:56.311317 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:56.311327 | orchestrator | 2026-01-10 14:25:56.311338 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:56.311349 | orchestrator | Saturday 10 January 2026 14:25:53 +0000 (0:00:00.188) 0:00:35.972 ****** 2026-01-10 14:25:56.311360 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:56.311370 | orchestrator | 2026-01-10 14:25:56.311381 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:56.311391 | orchestrator | Saturday 10 January 2026 14:25:53 +0000 (0:00:00.192) 0:00:36.164 ****** 2026-01-10 14:25:56.311402 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:56.311413 | orchestrator | 2026-01-10 14:25:56.311423 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:56.311434 | orchestrator | Saturday 10 January 2026 14:25:54 +0000 (0:00:00.212) 0:00:36.377 ****** 2026-01-10 14:25:56.311444 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:56.311454 | orchestrator | 2026-01-10 14:25:56.311465 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:56.311476 | orchestrator | Saturday 10 January 2026 14:25:54 +0000 (0:00:00.413) 0:00:36.791 ****** 2026-01-10 14:25:56.311486 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:56.311497 | orchestrator | 2026-01-10 14:25:56.311507 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:56.311518 | orchestrator | Saturday 10 January 2026 14:25:54 +0000 (0:00:00.170) 0:00:36.961 ****** 2026-01-10 14:25:56.311528 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:25:56.311539 | orchestrator | 2026-01-10 14:25:56.311549 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:25:56.311560 | orchestrator | Saturday 10 January 2026 14:25:54 +0000 (0:00:00.245) 0:00:37.206 ****** 
2026-01-10 14:25:56.311570 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-10 14:25:56.311581 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-01-10 14:25:56.311591 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-01-10 14:25:56.311602 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-01-10 14:25:56.311612 | orchestrator |
2026-01-10 14:25:56.311623 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:56.311633 | orchestrator | Saturday 10 January 2026 14:25:55 +0000 (0:00:00.685) 0:00:37.892 ******
2026-01-10 14:25:56.311644 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:56.311662 | orchestrator |
2026-01-10 14:25:56.311673 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:56.311683 | orchestrator | Saturday 10 January 2026 14:25:55 +0000 (0:00:00.194) 0:00:38.087 ******
2026-01-10 14:25:56.311694 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:56.311705 | orchestrator |
2026-01-10 14:25:56.311715 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:56.311726 | orchestrator | Saturday 10 January 2026 14:25:55 +0000 (0:00:00.169) 0:00:38.257 ******
2026-01-10 14:25:56.311736 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:56.311747 | orchestrator |
2026-01-10 14:25:56.311758 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:25:56.311768 | orchestrator | Saturday 10 January 2026 14:25:56 +0000 (0:00:00.172) 0:00:38.429 ******
2026-01-10 14:25:56.311779 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:25:56.311790 | orchestrator |
2026-01-10 14:25:56.311848 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-10 14:26:00.757738 | orchestrator | Saturday 10 January 2026 14:25:56 +0000 (0:00:00.173) 0:00:38.603 ******
2026-01-10 14:26:00.757832 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-01-10 14:26:00.757840 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-01-10 14:26:00.757845 | orchestrator |
2026-01-10 14:26:00.757849 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-10 14:26:00.757854 | orchestrator | Saturday 10 January 2026 14:25:56 +0000 (0:00:00.143) 0:00:38.746 ******
2026-01-10 14:26:00.757858 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:26:00.757863 | orchestrator |
2026-01-10 14:26:00.757867 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-10 14:26:00.757871 | orchestrator | Saturday 10 January 2026 14:25:56 +0000 (0:00:00.107) 0:00:38.853 ******
2026-01-10 14:26:00.757875 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:26:00.757879 | orchestrator |
2026-01-10 14:26:00.757883 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-10 14:26:00.757886 | orchestrator | Saturday 10 January 2026 14:25:56 +0000 (0:00:00.101) 0:00:38.954 ******
2026-01-10 14:26:00.757890 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:26:00.757894 | orchestrator |
2026-01-10 14:26:00.757897 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-10 14:26:00.757901 | orchestrator | Saturday 10 January 2026 14:25:56 +0000 (0:00:00.243) 0:00:39.198 ******
2026-01-10 14:26:00.757905 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:26:00.757909 | orchestrator |
2026-01-10 14:26:00.757914 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-10 14:26:00.757917 | orchestrator | Saturday 10 January 2026 14:25:57 +0000 (0:00:00.120) 0:00:39.318 ******
2026-01-10 14:26:00.757921 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4cb3fc90-004d-5443-9ae7-f5eff9c4438f'}})
2026-01-10 14:26:00.757926 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dec76364-a7ee-5469-8bc3-2dcf5060f83e'}})
2026-01-10 14:26:00.757929 | orchestrator |
2026-01-10 14:26:00.757933 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-10 14:26:00.757937 | orchestrator | Saturday 10 January 2026 14:25:57 +0000 (0:00:00.142) 0:00:39.460 ******
2026-01-10 14:26:00.757941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4cb3fc90-004d-5443-9ae7-f5eff9c4438f'}})
2026-01-10 14:26:00.757946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dec76364-a7ee-5469-8bc3-2dcf5060f83e'}})
2026-01-10 14:26:00.757950 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:26:00.757954 | orchestrator |
2026-01-10 14:26:00.757958 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-10 14:26:00.757962 | orchestrator | Saturday 10 January 2026 14:25:57 +0000 (0:00:00.150) 0:00:39.611 ******
2026-01-10 14:26:00.758003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4cb3fc90-004d-5443-9ae7-f5eff9c4438f'}})
2026-01-10 14:26:00.758008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dec76364-a7ee-5469-8bc3-2dcf5060f83e'}})
2026-01-10 14:26:00.758011 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:26:00.758043 | orchestrator |
2026-01-10 14:26:00.758048 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-10 14:26:00.758052 | orchestrator | Saturday 10 January 2026 14:25:57 +0000 (0:00:00.133) 0:00:39.745 ******
2026-01-10 14:26:00.758066 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4cb3fc90-004d-5443-9ae7-f5eff9c4438f'}})
2026-01-10 14:26:00.758070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dec76364-a7ee-5469-8bc3-2dcf5060f83e'}})
2026-01-10 14:26:00.758074 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:26:00.758078 | orchestrator |
2026-01-10 14:26:00.758081 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-10 14:26:00.758085 | orchestrator | Saturday 10 January 2026 14:25:57 +0000 (0:00:00.240) 0:00:39.985 ******
2026-01-10 14:26:00.758089 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:26:00.758093 | orchestrator |
2026-01-10 14:26:00.758096 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-10 14:26:00.758100 | orchestrator | Saturday 10 January 2026 14:25:57 +0000 (0:00:00.145) 0:00:40.131 ******
2026-01-10 14:26:00.758104 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:26:00.758107 | orchestrator |
2026-01-10 14:26:00.758111 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-10 14:26:00.758115 | orchestrator | Saturday 10 January 2026 14:25:57 +0000 (0:00:00.152) 0:00:40.284 ******
2026-01-10 14:26:00.758118 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:26:00.758122 | orchestrator |
2026-01-10 14:26:00.758126 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-10 14:26:00.758129 | orchestrator | Saturday 10 January 2026 14:25:58 +0000 (0:00:00.124) 0:00:40.408 ******
2026-01-10 14:26:00.758133 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:26:00.758137 | orchestrator |
2026-01-10 14:26:00.758140 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-10 14:26:00.758144 | orchestrator | Saturday 10 January 2026 14:25:58 +0000 (0:00:00.153) 0:00:40.561 ******
2026-01-10 14:26:00.758148 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:26:00.758151 | orchestrator |
2026-01-10 14:26:00.758155 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-10 14:26:00.758159 | orchestrator | Saturday 10 January 2026 14:25:58 +0000 (0:00:00.166) 0:00:40.728 ******
2026-01-10 14:26:00.758163 | orchestrator | ok: [testbed-node-5] => {
2026-01-10 14:26:00.758166 | orchestrator |     "ceph_osd_devices": {
2026-01-10 14:26:00.758170 | orchestrator |         "sdb": {
2026-01-10 14:26:00.758185 | orchestrator |             "osd_lvm_uuid": "4cb3fc90-004d-5443-9ae7-f5eff9c4438f"
2026-01-10 14:26:00.758189 | orchestrator |         },
2026-01-10 14:26:00.758193 | orchestrator |         "sdc": {
2026-01-10 14:26:00.758197 | orchestrator |             "osd_lvm_uuid": "dec76364-a7ee-5469-8bc3-2dcf5060f83e"
2026-01-10 14:26:00.758201 | orchestrator |         }
2026-01-10 14:26:00.758205 | orchestrator |     }
2026-01-10 14:26:00.758209 | orchestrator | }
2026-01-10 14:26:00.758213 | orchestrator |
2026-01-10 14:26:00.758216 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-10 14:26:00.758220 | orchestrator | Saturday 10 January 2026 14:25:58 +0000 (0:00:00.138) 0:00:40.866 ******
2026-01-10 14:26:00.758224 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:26:00.758228 | orchestrator |
2026-01-10 14:26:00.758231 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-10 14:26:00.758235 | orchestrator | Saturday 10 January 2026 14:25:58 +0000 (0:00:00.162) 0:00:41.029 ******
2026-01-10 14:26:00.758244 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:26:00.758247 | orchestrator |
2026-01-10 14:26:00.758251 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-10 14:26:00.758255 | orchestrator | Saturday 10 January 2026 14:25:59 +0000 (0:00:00.398) 0:00:41.428 ******
2026-01-10 14:26:00.758259 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:26:00.758262 | orchestrator |
2026-01-10 14:26:00.758266 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-10 14:26:00.758270 | orchestrator | Saturday 10 January 2026 14:25:59 +0000 (0:00:00.180) 0:00:41.608 ******
2026-01-10 14:26:00.758273 | orchestrator | changed: [testbed-node-5] => {
2026-01-10 14:26:00.758277 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-10 14:26:00.758281 | orchestrator |         "ceph_osd_devices": {
2026-01-10 14:26:00.758285 | orchestrator |             "sdb": {
2026-01-10 14:26:00.758288 | orchestrator |                 "osd_lvm_uuid": "4cb3fc90-004d-5443-9ae7-f5eff9c4438f"
2026-01-10 14:26:00.758292 | orchestrator |             },
2026-01-10 14:26:00.758296 | orchestrator |             "sdc": {
2026-01-10 14:26:00.758300 | orchestrator |                 "osd_lvm_uuid": "dec76364-a7ee-5469-8bc3-2dcf5060f83e"
2026-01-10 14:26:00.758303 | orchestrator |             }
2026-01-10 14:26:00.758307 | orchestrator |         },
2026-01-10 14:26:00.758312 | orchestrator |         "lvm_volumes": [
2026-01-10 14:26:00.758316 | orchestrator |             {
2026-01-10 14:26:00.758320 | orchestrator |                 "data": "osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f",
2026-01-10 14:26:00.758325 | orchestrator |                 "data_vg": "ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f"
2026-01-10 14:26:00.758329 | orchestrator |             },
2026-01-10 14:26:00.758333 | orchestrator |             {
2026-01-10 14:26:00.758338 | orchestrator |                 "data": "osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e",
2026-01-10 14:26:00.758342 | orchestrator |                 "data_vg": "ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e"
2026-01-10 14:26:00.758346 | orchestrator |             }
2026-01-10 14:26:00.758354 | orchestrator |         ]
2026-01-10 14:26:00.758358 | orchestrator |     }
2026-01-10 14:26:00.758362 | orchestrator | }
2026-01-10 14:26:00.758366 | orchestrator |
2026-01-10 14:26:00.758371 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-10 14:26:00.758375 | orchestrator | Saturday 10 January 2026 14:25:59 +0000 (0:00:00.261) 0:00:41.870 ******
2026-01-10 14:26:00.758379 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-10 14:26:00.758384 | orchestrator |
2026-01-10 14:26:00.758388 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:26:00.758392 | orchestrator | testbed-node-3 : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0
2026-01-10 14:26:00.758398 | orchestrator | testbed-node-4 : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0
2026-01-10 14:26:00.758402 | orchestrator | testbed-node-5 : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0
2026-01-10 14:26:00.758406 | orchestrator |
2026-01-10 14:26:00.758411 | orchestrator |
2026-01-10 14:26:00.758415 | orchestrator |
2026-01-10 14:26:00.758419 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:26:00.758424 | orchestrator | Saturday 10 January 2026 14:26:00 +0000 (0:00:01.163) 0:00:43.033 ******
2026-01-10 14:26:00.758428 | orchestrator | ===============================================================================
2026-01-10 14:26:00.758432 | orchestrator | Write configuration file ------------------------------------------------ 4.22s
2026-01-10 14:26:00.758437 | orchestrator | Add known links to the list of available block devices ------------------ 1.32s
2026-01-10 14:26:00.758441 | orchestrator | Add known partitions to the list of available block devices ------------- 1.14s
2026-01-10 14:26:00.758445 | orchestrator | Add known partitions to the list of available block devices ------------- 1.14s
2026-01-10 14:26:00.758453 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.12s
2026-01-10 14:26:00.758457 | orchestrator | Add known links to the list of available block devices ------------------ 0.99s
2026-01-10 14:26:00.758462 | orchestrator | Add known partitions to the list of available block devices ------------- 0.98s
2026-01-10 14:26:00.758466 | orchestrator | Print configuration data ------------------------------------------------ 0.87s
2026-01-10 14:26:00.758470 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s
2026-01-10 14:26:00.758475 | orchestrator | Get initial list of available block devices ----------------------------- 0.76s
2026-01-10 14:26:00.758479 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2026-01-10 14:26:00.758483 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2026-01-10 14:26:00.758488 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.69s
2026-01-10 14:26:00.758494 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2026-01-10 14:26:01.206603 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2026-01-10 14:26:01.206684 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2026-01-10 14:26:01.206691 | orchestrator | Print DB devices -------------------------------------------------------- 0.66s
2026-01-10 14:26:01.206696 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2026-01-10 14:26:01.206701 | orchestrator | Set DB devices config data ---------------------------------------------- 0.64s
2026-01-10 14:26:01.206706 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.57s
2026-01-10 14:26:23.946325 | orchestrator | 2026-01-10 14:26:23 | INFO  | Task 437aa22e-6f67-4801-8656-949352817889 (sync inventory) is running in background. Output coming soon.
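The "Compile lvm_volumes" step above turns each entry of `ceph_osd_devices` into one `lvm_volumes` item, using the `osd-block-<uuid>` / `ceph-<uuid>` naming visible in the "Print configuration data" output. A minimal Python sketch of that mapping (the helper name is mine; the playbook itself builds this structure with `set_fact` and Jinja2 expressions):

```python
# Sketch of the ceph_osd_devices -> lvm_volumes mapping shown in the
# "Print configuration data" task output (block-only OSDs, no separate
# DB/WAL devices, which is why the block+db/block+wal branches skipped).

def compile_lvm_volumes(ceph_osd_devices: dict) -> list:
    """Return one lvm_volumes entry per OSD device.

    Each entry names the logical volume (data) and its volume group
    (data_vg) after the device's osd_lvm_uuid.
    """
    return [
        {
            "data": f"osd-block-{conf['osd_lvm_uuid']}",
            "data_vg": f"ceph-{conf['osd_lvm_uuid']}",
        }
        for device, conf in sorted(ceph_osd_devices.items())
    ]

# The UUIDs logged for testbed-node-5:
devices = {
    "sdb": {"osd_lvm_uuid": "4cb3fc90-004d-5443-9ae7-f5eff9c4438f"},
    "sdc": {"osd_lvm_uuid": "dec76364-a7ee-5469-8bc3-2dcf5060f83e"},
}
volumes = compile_lvm_volumes(devices)
print(volumes[0]["data_vg"])  # ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f
```

Note that the per-device UUIDs differ between nodes (compare sdb on testbed-node-4 vs testbed-node-5), so each host gets its own VG/LV names even for identically named devices.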
2026-01-10 14:26:52.476621 | orchestrator | 2026-01-10 14:26:25 | INFO  | Starting group_vars file reorganization
2026-01-10 14:26:52.476738 | orchestrator | 2026-01-10 14:26:25 | INFO  | Moved 0 file(s) to their respective directories
2026-01-10 14:26:52.476755 | orchestrator | 2026-01-10 14:26:25 | INFO  | Group_vars file reorganization completed
2026-01-10 14:26:52.476768 | orchestrator | 2026-01-10 14:26:28 | INFO  | Starting variable preparation from inventory
2026-01-10 14:26:52.476779 | orchestrator | 2026-01-10 14:26:31 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-01-10 14:26:52.476791 | orchestrator | 2026-01-10 14:26:31 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-01-10 14:26:52.476822 | orchestrator | 2026-01-10 14:26:31 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-01-10 14:26:52.476834 | orchestrator | 2026-01-10 14:26:31 | INFO  | 3 file(s) written, 6 host(s) processed
2026-01-10 14:26:52.476845 | orchestrator | 2026-01-10 14:26:31 | INFO  | Variable preparation completed
2026-01-10 14:26:52.476856 | orchestrator | 2026-01-10 14:26:33 | INFO  | Starting inventory overwrite handling
2026-01-10 14:26:52.476876 | orchestrator | 2026-01-10 14:26:33 | INFO  | Handling group overwrites in 99-overwrite
2026-01-10 14:26:52.476896 | orchestrator | 2026-01-10 14:26:33 | INFO  | Removing group frr:children from 60-generic
2026-01-10 14:26:52.476913 | orchestrator | 2026-01-10 14:26:33 | INFO  | Removing group netbird:children from 50-infrastructure
2026-01-10 14:26:52.476931 | orchestrator | 2026-01-10 14:26:33 | INFO  | Removing group ceph-rgw from 50-ceph
2026-01-10 14:26:52.476951 | orchestrator | 2026-01-10 14:26:33 | INFO  | Removing group ceph-mds from 50-ceph
2026-01-10 14:26:52.476970 | orchestrator | 2026-01-10 14:26:33 | INFO  | Handling group overwrites in 20-roles
2026-01-10 14:26:52.477098 | orchestrator | 2026-01-10 14:26:33 | INFO  | Removing group k3s_node from 50-infrastructure
2026-01-10 14:26:52.477114 | orchestrator | 2026-01-10 14:26:33 | INFO  | Removed 5 group(s) in total
2026-01-10 14:26:52.477125 | orchestrator | 2026-01-10 14:26:33 | INFO  | Inventory overwrite handling completed
2026-01-10 14:26:52.477135 | orchestrator | 2026-01-10 14:26:34 | INFO  | Starting merge of inventory files
2026-01-10 14:26:52.477146 | orchestrator | 2026-01-10 14:26:34 | INFO  | Inventory files merged successfully
2026-01-10 14:26:52.477157 | orchestrator | 2026-01-10 14:26:39 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-01-10 14:26:52.477170 | orchestrator | 2026-01-10 14:26:51 | INFO  | Successfully wrote ClusterShell configuration
2026-01-10 14:26:52.477183 | orchestrator | [master 58b6f14] 2026-01-10-14-26
2026-01-10 14:26:52.477196 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-01-10 14:26:54.607689 | orchestrator | 2026-01-10 14:26:54 | INFO  | Task d35ac301-c6e3-459c-879b-0f328fad17c3 (ceph-create-lvm-devices) was prepared for execution.
2026-01-10 14:26:54.607794 | orchestrator | 2026-01-10 14:26:54 | INFO  | It takes a moment until task d35ac301-c6e3-459c-879b-0f328fad17c3 (ceph-create-lvm-devices) has been started and output is visible here.
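The queued ceph-create-lvm-devices task will now turn the names written to the configuration into actual volume groups and logical volumes on each node. As a rough illustration of what one `lvm_volumes` entry implies (an assumption about the mechanics: the play itself drives LVM through Ansible modules, and the shell equivalents below are illustrative, not taken from this log):

```python
# Hypothetical shell equivalents for realizing one lvm_volumes entry.
# The VG/LV names follow the ceph-<uuid> / osd-block-<uuid> convention
# visible in the configuration data printed earlier in this log.

def lvm_commands(device: str, osd_lvm_uuid: str) -> list:
    """Return the shell commands that would create the VG and LV
    corresponding to a single OSD device entry."""
    vg = f"ceph-{osd_lvm_uuid}"
    lv = f"osd-block-{osd_lvm_uuid}"
    return [
        f"vgcreate {vg} /dev/{device}",        # one VG per OSD device
        f"lvcreate -n {lv} -l 100%FREE {vg}",  # one LV spanning the VG
    ]

# Using the sdb entry logged for testbed-node-4:
cmds = lvm_commands("sdb", "0fad3856-f6d1-50e2-a5cb-d9f4a0859299")
print(cmds[0])  # vgcreate ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299 /dev/sdb
```

ceph-volume later consumes `lvm_volumes` entries of this shape (`data` plus `data_vg`) to prepare the OSDs on top of these logical volumes.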
2026-01-10 14:27:07.109394 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-10 14:27:07.109517 | orchestrator | 2.16.14
2026-01-10 14:27:07.109530 | orchestrator |
2026-01-10 14:27:07.109539 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-10 14:27:07.109580 | orchestrator |
2026-01-10 14:27:07.109590 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-10 14:27:07.109599 | orchestrator | Saturday 10 January 2026 14:26:59 +0000 (0:00:00.327) 0:00:00.327 ******
2026-01-10 14:27:07.109607 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 14:27:07.109615 | orchestrator |
2026-01-10 14:27:07.109623 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-10 14:27:07.109632 | orchestrator | Saturday 10 January 2026 14:26:59 +0000 (0:00:00.309) 0:00:00.637 ******
2026-01-10 14:27:07.109642 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:27:07.109655 | orchestrator |
2026-01-10 14:27:07.109669 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:27:07.109682 | orchestrator | Saturday 10 January 2026 14:26:59 +0000 (0:00:00.263) 0:00:00.900 ******
2026-01-10 14:27:07.109693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-10 14:27:07.109706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-10 14:27:07.109718 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-10 14:27:07.109729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-10 14:27:07.109741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-10 14:27:07.109752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-10 14:27:07.109763 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-10 14:27:07.109775 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-10 14:27:07.109787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-10 14:27:07.109798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-10 14:27:07.109811 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-10 14:27:07.109822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-10 14:27:07.109863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-10 14:27:07.109876 | orchestrator |
2026-01-10 14:27:07.109886 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:27:07.109898 | orchestrator | Saturday 10 January 2026 14:27:00 +0000 (0:00:00.538) 0:00:01.438 ******
2026-01-10 14:27:07.109910 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:07.109922 | orchestrator |
2026-01-10 14:27:07.109934 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:27:07.109948 | orchestrator | Saturday 10 January 2026 14:27:00 +0000 (0:00:00.210) 0:00:01.649 ******
2026-01-10 14:27:07.109962 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:07.109975 | orchestrator |
2026-01-10 14:27:07.109989 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:27:07.110002 | orchestrator | Saturday 10 January 2026 14:27:00 +0000 (0:00:00.196) 0:00:01.845 ******
2026-01-10 14:27:07.110106 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:07.110144 | orchestrator |
2026-01-10 14:27:07.110154 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:27:07.110163 | orchestrator | Saturday 10 January 2026 14:27:01 +0000 (0:00:00.211) 0:00:02.057 ******
2026-01-10 14:27:07.110174 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:07.110188 | orchestrator |
2026-01-10 14:27:07.110201 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:27:07.110213 | orchestrator | Saturday 10 January 2026 14:27:01 +0000 (0:00:00.193) 0:00:02.250 ******
2026-01-10 14:27:07.110226 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:07.110240 | orchestrator |
2026-01-10 14:27:07.110255 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:27:07.110270 | orchestrator | Saturday 10 January 2026 14:27:01 +0000 (0:00:00.213) 0:00:02.463 ******
2026-01-10 14:27:07.110284 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:07.110299 | orchestrator |
2026-01-10 14:27:07.110314 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:27:07.110328 | orchestrator | Saturday 10 January 2026 14:27:01 +0000 (0:00:00.221) 0:00:02.685 ******
2026-01-10 14:27:07.110340 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:07.110352 | orchestrator |
2026-01-10 14:27:07.110364 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:27:07.110377 | orchestrator | Saturday 10 January 2026 14:27:01 +0000 (0:00:00.221) 0:00:02.906 ******
2026-01-10 14:27:07.110391 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:07.110405 | orchestrator |
2026-01-10 14:27:07.110418 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:27:07.110432 | orchestrator | Saturday 10 January 2026 14:27:02 +0000 (0:00:00.199) 0:00:03.106 ******
2026-01-10 14:27:07.110441 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431)
2026-01-10 14:27:07.110450 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431)
2026-01-10 14:27:07.110459 | orchestrator |
2026-01-10 14:27:07.110467 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:27:07.110494 | orchestrator | Saturday 10 January 2026 14:27:02 +0000 (0:00:00.427) 0:00:03.533 ******
2026-01-10 14:27:07.110503 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_70c6fd94-218f-483a-b965-10c70b1b97fc)
2026-01-10 14:27:07.110511 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_70c6fd94-218f-483a-b965-10c70b1b97fc)
2026-01-10 14:27:07.110519 | orchestrator |
2026-01-10 14:27:07.110527 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:27:07.110535 | orchestrator | Saturday 10 January 2026 14:27:03 +0000 (0:00:00.769) 0:00:04.302 ******
2026-01-10 14:27:07.110543 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f7705bd4-29b3-411e-b8b9-50568fcffd73)
2026-01-10 14:27:07.110563 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f7705bd4-29b3-411e-b8b9-50568fcffd73)
2026-01-10 14:27:07.110571 | orchestrator |
2026-01-10 14:27:07.110579 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:27:07.110586 | orchestrator | Saturday 10 January 2026 14:27:04 +0000 (0:00:00.804) 0:00:05.106 ******
2026-01-10 14:27:07.110594 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2130b2ec-580e-4b39-88b4-748d7926916f)
2026-01-10 14:27:07.110602 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2130b2ec-580e-4b39-88b4-748d7926916f)
2026-01-10 14:27:07.110609 | orchestrator |
2026-01-10 14:27:07.110617 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-10 14:27:07.110625 | orchestrator | Saturday 10 January 2026 14:27:05 +0000 (0:00:00.831) 0:00:05.938 ******
2026-01-10 14:27:07.110632 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-10 14:27:07.110640 | orchestrator |
2026-01-10 14:27:07.110648 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:27:07.110656 | orchestrator | Saturday 10 January 2026 14:27:05 +0000 (0:00:00.323) 0:00:06.262 ******
2026-01-10 14:27:07.110663 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-10 14:27:07.110671 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-10 14:27:07.110679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-10 14:27:07.110702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-10 14:27:07.110710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-10 14:27:07.110718 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-10 14:27:07.110725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-10 14:27:07.110733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-10 14:27:07.110741 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-10 14:27:07.110748 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-10 14:27:07.110756 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-10 14:27:07.110768 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-10 14:27:07.110776 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-10 14:27:07.110784 | orchestrator |
2026-01-10 14:27:07.110792 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:27:07.110799 | orchestrator | Saturday 10 January 2026 14:27:05 +0000 (0:00:00.402) 0:00:06.664 ******
2026-01-10 14:27:07.110807 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:07.110815 | orchestrator |
2026-01-10 14:27:07.110822 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:27:07.110830 | orchestrator | Saturday 10 January 2026 14:27:05 +0000 (0:00:00.182) 0:00:06.847 ******
2026-01-10 14:27:07.110838 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:07.110845 | orchestrator |
2026-01-10 14:27:07.110853 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:27:07.110861 | orchestrator | Saturday 10 January 2026 14:27:06 +0000 (0:00:00.202) 0:00:07.050 ******
2026-01-10 14:27:07.110868 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:07.110876 | orchestrator |
2026-01-10 14:27:07.110884 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:27:07.110891 | orchestrator | Saturday 10 January 2026 14:27:06 +0000 (0:00:00.209) 0:00:07.259 ******
2026-01-10 14:27:07.110904 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:07.110912 | orchestrator |
2026-01-10 14:27:07.110920 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:27:07.110928 | orchestrator | Saturday 10 January 2026 14:27:06 +0000 (0:00:00.200) 0:00:07.460 ******
2026-01-10 14:27:07.110935 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:07.110943 | orchestrator |
2026-01-10 14:27:07.110951 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:27:07.110958 | orchestrator | Saturday 10 January 2026 14:27:06 +0000 (0:00:00.180) 0:00:07.640 ******
2026-01-10 14:27:07.110982 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:07.110997 | orchestrator |
2026-01-10 14:27:07.111005 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:27:07.111013 | orchestrator | Saturday 10 January 2026 14:27:06 +0000 (0:00:00.199) 0:00:07.840 ******
2026-01-10 14:27:07.111021 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:07.111029 | orchestrator |
2026-01-10 14:27:07.111057 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:27:15.440532 | orchestrator | Saturday 10 January 2026 14:27:07 +0000 (0:00:00.192) 0:00:08.032 ******
2026-01-10 14:27:15.440637 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:15.440651 | orchestrator |
2026-01-10 14:27:15.440661 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:27:15.440669 | orchestrator | Saturday 10 January 2026 14:27:07 +0000 (0:00:00.212) 0:00:08.244 ******
2026-01-10 14:27:15.440677 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-10 14:27:15.440686 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-10 14:27:15.440695 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-10 14:27:15.440703 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-10 14:27:15.440712 | orchestrator |
2026-01-10 14:27:15.440720 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:27:15.440728 | orchestrator | Saturday 10 January 2026 14:27:08 +0000 (0:00:00.900) 0:00:09.145 ******
2026-01-10 14:27:15.440736 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:15.440744 | orchestrator |
2026-01-10 14:27:15.440752 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:27:15.440760 | orchestrator | Saturday 10 January 2026 14:27:08 +0000 (0:00:00.206) 0:00:09.352 ******
2026-01-10 14:27:15.440765 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:15.440770 | orchestrator |
2026-01-10 14:27:15.440776 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:27:15.440784 | orchestrator | Saturday 10 January 2026 14:27:08 +0000 (0:00:00.203) 0:00:09.555 ******
2026-01-10 14:27:15.440792 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:15.440799 | orchestrator |
2026-01-10 14:27:15.440807 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-10 14:27:15.440815 | orchestrator | Saturday 10 January 2026 14:27:08 +0000 (0:00:00.196) 0:00:09.751 ******
2026-01-10 14:27:15.440823 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:15.440831 | orchestrator |
2026-01-10 14:27:15.440839 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-10 14:27:15.440847 | orchestrator | Saturday 10 January 2026 14:27:09 +0000 (0:00:00.186) 0:00:09.938 ******
2026-01-10 14:27:15.440854 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:15.440862 | orchestrator |
2026-01-10 14:27:15.440870 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-10 14:27:15.440878 | orchestrator | Saturday 10 January 2026 14:27:09 +0000 (0:00:00.123) 0:00:10.062 ******
2026-01-10 14:27:15.440887 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6bac10f4-8703-5b93-90a3-91ba865f27b3'}})
2026-01-10 14:27:15.440895 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ef830303-d908-5775-964e-bef8687288a6'}})
2026-01-10 14:27:15.440903 | orchestrator |
2026-01-10 14:27:15.440912 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-10 14:27:15.440940 | orchestrator | Saturday 10 January 2026 14:27:09 +0000 (0:00:00.191) 0:00:10.253 ******
2026-01-10 14:27:15.440950 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})
2026-01-10 14:27:15.440974 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})
2026-01-10 14:27:15.440983 | orchestrator |
2026-01-10 14:27:15.440992 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-10 14:27:15.441008 | orchestrator | Saturday 10 January 2026 14:27:11 +0000 (0:00:01.973) 0:00:12.226 ******
2026-01-10 14:27:15.441017 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})
2026-01-10 14:27:15.441027 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})
2026-01-10 14:27:15.441034 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:15.441043 | orchestrator |
2026-01-10 14:27:15.441050 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-10 14:27:15.441157 | orchestrator | Saturday 10 January 2026 14:27:11 +0000 (0:00:00.176) 0:00:12.403 ******
2026-01-10 14:27:15.441178 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})
2026-01-10 14:27:15.441184 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})
2026-01-10 14:27:15.441190 | orchestrator |
2026-01-10 14:27:15.441197 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-10 14:27:15.441201 | orchestrator | Saturday 10 January 2026 14:27:12 +0000 (0:00:01.513) 0:00:13.917 ******
2026-01-10 14:27:15.441206 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})
2026-01-10 14:27:15.441211 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})
2026-01-10 14:27:15.441216 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:15.441221 | orchestrator |
2026-01-10 14:27:15.441235 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-10 14:27:15.441240 | orchestrator | Saturday 10 January 2026 14:27:13 +0000 (0:00:00.203) 0:00:14.121 ******
2026-01-10 14:27:15.441269 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:15.441310 | orchestrator |
2026-01-10 14:27:15.441316 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-10 14:27:15.441321 | orchestrator | Saturday 10 January 2026 14:27:13 +0000 (0:00:00.152) 0:00:14.274 ******
2026-01-10 14:27:15.441325 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})
2026-01-10 14:27:15.441331 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})
2026-01-10 14:27:15.441335 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:15.441340 | orchestrator |
2026-01-10 14:27:15.441345 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-10 14:27:15.441350 | orchestrator | Saturday 10 January 2026 14:27:13 +0000 (0:00:00.411) 0:00:14.685 ******
2026-01-10 14:27:15.441354 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:15.441359 | orchestrator |
2026-01-10 14:27:15.441364 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-10 14:27:15.441369 | orchestrator | Saturday 10 January 2026 14:27:13 +0000 (0:00:00.175) 0:00:14.861 ******
2026-01-10 14:27:15.441384 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})
2026-01-10 14:27:15.441389 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})
2026-01-10 14:27:15.441394 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:15.441399 | orchestrator |
2026-01-10 14:27:15.441403 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-10 14:27:15.441415 | orchestrator | Saturday 10 January 2026 14:27:14 +0000 (0:00:00.188) 0:00:15.049 ******
2026-01-10 14:27:15.441420 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:15.441425 | orchestrator |
2026-01-10 14:27:15.441429 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-10 14:27:15.441434 | orchestrator | Saturday 10 January 2026 14:27:14 +0000 (0:00:00.165) 0:00:15.214 ******
2026-01-10 14:27:15.441439 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})
2026-01-10 14:27:15.441443 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})
2026-01-10 14:27:15.441448 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:15.441453 | orchestrator |
2026-01-10 14:27:15.441458 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-10 14:27:15.441462 | orchestrator | Saturday 10 January 2026 14:27:14 +0000 (0:00:00.246) 0:00:15.461 ******
2026-01-10 14:27:15.441467 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:27:15.441472 | orchestrator |
2026-01-10 14:27:15.441477 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-10 14:27:15.441495 | orchestrator | Saturday 10 January 2026 14:27:14 +0000 (0:00:00.169) 0:00:15.631 ******
2026-01-10 14:27:15.441503 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})
2026-01-10 14:27:15.441508 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})
2026-01-10 14:27:15.441512 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:15.441517 | orchestrator |
2026-01-10 14:27:15.441522 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-10 14:27:15.441527 | orchestrator | Saturday 10 January 2026 14:27:14 +0000 (0:00:00.186) 0:00:15.818 ******
2026-01-10 14:27:15.441531 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})
2026-01-10 14:27:15.441536 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})
2026-01-10 14:27:15.441541 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:15.441545 | orchestrator |
2026-01-10 14:27:15.441550 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-10 14:27:15.441555 | orchestrator | Saturday 10 January 2026 14:27:15 +0000 (0:00:00.204) 0:00:16.022 ******
2026-01-10 14:27:15.441560 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})
2026-01-10 14:27:15.441564 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})
2026-01-10 14:27:15.441569 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:15.441574 | orchestrator |
2026-01-10 14:27:15.441579 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-10 14:27:15.441587 | orchestrator | Saturday 10 January 2026 14:27:15 +0000 (0:00:00.185) 0:00:16.207 ******
2026-01-10 14:27:15.441592 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:15.441596 | orchestrator |
2026-01-10 14:27:15.441601 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-10 14:27:15.441611 | orchestrator | Saturday 10 January 2026 14:27:15 +0000 (0:00:00.156) 0:00:16.364 ******
2026-01-10 14:27:22.905792 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.905900 | orchestrator |
2026-01-10 14:27:22.905918 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-10 14:27:22.905931 | orchestrator | Saturday 10 January 2026 14:27:15 +0000 (0:00:00.177) 0:00:16.541 ******
2026-01-10 14:27:22.905943 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.905954 | orchestrator |
2026-01-10 14:27:22.905966 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-10 14:27:22.905978 | orchestrator | Saturday 10 January 2026 14:27:15 +0000 (0:00:00.198) 0:00:16.739 ******
2026-01-10 14:27:22.905989 | orchestrator | ok: [testbed-node-3] => {
2026-01-10 14:27:22.906001 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-10 14:27:22.906062 | orchestrator | }
2026-01-10 14:27:22.906076 | orchestrator |
2026-01-10 14:27:22.906158 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-10 14:27:22.906171 | orchestrator | Saturday 10 January 2026 14:27:16 +0000 (0:00:00.423) 0:00:17.163 ******
2026-01-10 14:27:22.906181 | orchestrator | ok: [testbed-node-3] => {
2026-01-10 14:27:22.906193 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-10 14:27:22.906204 | orchestrator | }
2026-01-10 14:27:22.906215 | orchestrator |
2026-01-10 14:27:22.906227 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-10 14:27:22.906238 | orchestrator | Saturday 10 January 2026 14:27:16 +0000 (0:00:00.171) 0:00:17.335 ******
2026-01-10 14:27:22.906250 | orchestrator | ok: [testbed-node-3] => {
2026-01-10 14:27:22.906261 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-10 14:27:22.906272 | orchestrator | }
2026-01-10 14:27:22.906283 | orchestrator |
2026-01-10 14:27:22.906292 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-10 14:27:22.906302 | orchestrator | Saturday 10 January 2026 14:27:16 +0000 (0:00:00.201) 0:00:17.537 ******
2026-01-10 14:27:22.906312 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:27:22.906322 | orchestrator |
2026-01-10 14:27:22.906332 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-10 14:27:22.906343 | orchestrator | Saturday 10 January 2026 14:27:17 +0000 (0:00:00.747) 0:00:18.284 ******
2026-01-10 14:27:22.906353 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:27:22.906363 | orchestrator |
2026-01-10 14:27:22.906373 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-10 14:27:22.906383 | orchestrator | Saturday 10 January 2026 14:27:17 +0000 (0:00:00.567) 0:00:18.852 ******
2026-01-10 14:27:22.906393 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:27:22.906404 | orchestrator |
2026-01-10 14:27:22.906414 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-10 14:27:22.906424 | orchestrator | Saturday 10 January 2026 14:27:18 +0000 (0:00:00.597) 0:00:19.449 ******
2026-01-10 14:27:22.906434 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:27:22.906445 | orchestrator |
2026-01-10 14:27:22.906456 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-10 14:27:22.906467 | orchestrator | Saturday 10 January 2026 14:27:18 +0000 (0:00:00.179) 0:00:19.629 ******
2026-01-10 14:27:22.906479 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.906490 | orchestrator |
2026-01-10 14:27:22.906499 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-10 14:27:22.906510 | orchestrator | Saturday 10 January 2026 14:27:18 +0000 (0:00:00.130) 0:00:19.759 ******
2026-01-10 14:27:22.906520 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.906536 | orchestrator |
2026-01-10 14:27:22.906548 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-10 14:27:22.906607 | orchestrator | Saturday 10 January 2026 14:27:18 +0000 (0:00:00.121) 0:00:19.880 ******
2026-01-10 14:27:22.906623 | orchestrator | ok: [testbed-node-3] => {
2026-01-10 14:27:22.906636 | orchestrator |     "vgs_report": {
2026-01-10 14:27:22.906648 | orchestrator |         "vg": []
2026-01-10 14:27:22.906658 | orchestrator |     }
2026-01-10 14:27:22.906668 | orchestrator | }
2026-01-10 14:27:22.906679 | orchestrator |
2026-01-10 14:27:22.906689 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-10 14:27:22.906699 | orchestrator | Saturday 10 January 2026 14:27:19 +0000 (0:00:00.159) 0:00:20.039 ******
2026-01-10 14:27:22.906710 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.906720 | orchestrator |
2026-01-10 14:27:22.906730 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-10 14:27:22.906739 | orchestrator | Saturday 10 January 2026 14:27:19 +0000 (0:00:00.156) 0:00:20.195 ******
2026-01-10 14:27:22.906748 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.906758 | orchestrator |
2026-01-10 14:27:22.906767 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-10 14:27:22.906777 | orchestrator | Saturday 10 January 2026 14:27:19 +0000 (0:00:00.166) 0:00:20.362 ******
2026-01-10 14:27:22.906786 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.906796 | orchestrator |
2026-01-10 14:27:22.906806 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-10 14:27:22.906815 | orchestrator | Saturday 10 January 2026 14:27:19 +0000 (0:00:00.434) 0:00:20.796 ******
2026-01-10 14:27:22.906825 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.906835 | orchestrator |
2026-01-10 14:27:22.906846 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-10 14:27:22.906855 | orchestrator | Saturday 10 January 2026 14:27:20 +0000 (0:00:00.199) 0:00:20.996 ******
2026-01-10 14:27:22.906865 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.906875 | orchestrator |
2026-01-10 14:27:22.906884 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-10 14:27:22.906894 | orchestrator | Saturday 10 January 2026 14:27:20 +0000 (0:00:00.159) 0:00:21.155 ******
2026-01-10 14:27:22.906903 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.906913 | orchestrator |
2026-01-10 14:27:22.906923 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-10 14:27:22.906932 | orchestrator | Saturday 10 January 2026 14:27:20 +0000 (0:00:00.150) 0:00:21.305 ******
2026-01-10 14:27:22.906941 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.906951 | orchestrator |
2026-01-10 14:27:22.906961 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-10 14:27:22.906971 | orchestrator | Saturday 10 January 2026 14:27:20 +0000 (0:00:00.147) 0:00:21.452 ******
2026-01-10 14:27:22.907003 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.907014 | orchestrator |
2026-01-10 14:27:22.907025 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-10 14:27:22.907035 | orchestrator | Saturday 10 January 2026 14:27:20 +0000 (0:00:00.153) 0:00:21.606 ******
2026-01-10 14:27:22.907045 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.907056 | orchestrator |
2026-01-10 14:27:22.907066 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-10 14:27:22.907076 | orchestrator | Saturday 10 January 2026 14:27:20 +0000 (0:00:00.128) 0:00:21.735 ******
2026-01-10 14:27:22.907112 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.907123 | orchestrator |
2026-01-10 14:27:22.907133 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-10 14:27:22.907144 | orchestrator | Saturday 10 January 2026 14:27:20 +0000 (0:00:00.165) 0:00:21.901 ******
2026-01-10 14:27:22.907154 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.907164 | orchestrator |
2026-01-10 14:27:22.907174 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-10 14:27:22.907185 | orchestrator | Saturday 10 January 2026 14:27:21 +0000 (0:00:00.162) 0:00:22.064 ******
2026-01-10 14:27:22.907206 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.907218 | orchestrator |
2026-01-10 14:27:22.907229 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-10 14:27:22.907240 | orchestrator | Saturday 10 January 2026 14:27:21 +0000 (0:00:00.141) 0:00:22.206 ******
2026-01-10 14:27:22.907251 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.907261 | orchestrator |
2026-01-10 14:27:22.907273 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-10 14:27:22.907283 | orchestrator | Saturday 10 January 2026 14:27:21 +0000 (0:00:00.117) 0:00:22.323 ******
2026-01-10 14:27:22.907294 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.907305 | orchestrator |
2026-01-10 14:27:22.907315 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-10 14:27:22.907325 | orchestrator | Saturday 10 January 2026 14:27:21 +0000 (0:00:00.134) 0:00:22.458 ******
2026-01-10 14:27:22.907337 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})
2026-01-10 14:27:22.907349 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})
2026-01-10 14:27:22.907359 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.907368 | orchestrator |
2026-01-10 14:27:22.907378 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-10 14:27:22.907388 | orchestrator | Saturday 10 January 2026 14:27:21 +0000 (0:00:00.456) 0:00:22.914 ******
2026-01-10 14:27:22.907397 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})
2026-01-10 14:27:22.907407 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})
2026-01-10 14:27:22.907416 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.907426 | orchestrator |
2026-01-10 14:27:22.907437 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-10 14:27:22.907448 | orchestrator | Saturday 10 January 2026 14:27:22 +0000 (0:00:00.183) 0:00:23.097 ******
2026-01-10 14:27:22.907459 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})
2026-01-10 14:27:22.907469 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})
2026-01-10 14:27:22.907480 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:27:22.907490 | orchestrator |
2026-01-10 14:27:22.907500 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-10 14:27:22.907511 | orchestrator | Saturday 10 January 2026 14:27:22 +0000 (0:00:00.181) 0:00:23.279 ******
2026-01-10 14:27:22.907521 | orchestrator | skipping: [testbed-node-3] => (item={'data':
'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})  2026-01-10 14:27:22.907532 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})  2026-01-10 14:27:22.907543 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:27:22.907553 | orchestrator | 2026-01-10 14:27:22.907563 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-10 14:27:22.907574 | orchestrator | Saturday 10 January 2026 14:27:22 +0000 (0:00:00.227) 0:00:23.506 ****** 2026-01-10 14:27:22.907584 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})  2026-01-10 14:27:22.907594 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})  2026-01-10 14:27:22.907613 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:27:22.907624 | orchestrator | 2026-01-10 14:27:22.907634 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-10 14:27:22.907656 | orchestrator | Saturday 10 January 2026 14:27:22 +0000 (0:00:00.170) 0:00:23.677 ****** 2026-01-10 14:27:22.907676 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})  2026-01-10 14:27:28.306664 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})  2026-01-10 14:27:28.306794 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:27:28.306818 | orchestrator | 2026-01-10 14:27:28.306835 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-01-10 14:27:28.306852 | orchestrator | Saturday 10 January 2026 14:27:22 +0000 (0:00:00.155) 0:00:23.832 ****** 2026-01-10 14:27:28.306868 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})  2026-01-10 14:27:28.306884 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})  2026-01-10 14:27:28.306902 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:27:28.306919 | orchestrator | 2026-01-10 14:27:28.306937 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-10 14:27:28.306948 | orchestrator | Saturday 10 January 2026 14:27:23 +0000 (0:00:00.153) 0:00:23.986 ****** 2026-01-10 14:27:28.306958 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})  2026-01-10 14:27:28.306972 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})  2026-01-10 14:27:28.306989 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:27:28.307004 | orchestrator | 2026-01-10 14:27:28.307020 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-10 14:27:28.307037 | orchestrator | Saturday 10 January 2026 14:27:23 +0000 (0:00:00.146) 0:00:24.132 ****** 2026-01-10 14:27:28.307052 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:27:28.307070 | orchestrator | 2026-01-10 14:27:28.307086 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-10 14:27:28.307149 | orchestrator | Saturday 10 January 2026 14:27:23 +0000 
(0:00:00.513) 0:00:24.646 ****** 2026-01-10 14:27:28.307167 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:27:28.307184 | orchestrator | 2026-01-10 14:27:28.307203 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-10 14:27:28.307220 | orchestrator | Saturday 10 January 2026 14:27:24 +0000 (0:00:00.575) 0:00:25.221 ****** 2026-01-10 14:27:28.307237 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:27:28.307249 | orchestrator | 2026-01-10 14:27:28.307260 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-10 14:27:28.307271 | orchestrator | Saturday 10 January 2026 14:27:24 +0000 (0:00:00.122) 0:00:25.343 ****** 2026-01-10 14:27:28.307283 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'vg_name': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'}) 2026-01-10 14:27:28.307320 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'vg_name': 'ceph-ef830303-d908-5775-964e-bef8687288a6'}) 2026-01-10 14:27:28.307337 | orchestrator | 2026-01-10 14:27:28.307356 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-10 14:27:28.307374 | orchestrator | Saturday 10 January 2026 14:27:24 +0000 (0:00:00.159) 0:00:25.503 ****** 2026-01-10 14:27:28.307418 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})  2026-01-10 14:27:28.307434 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})  2026-01-10 14:27:28.307451 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:27:28.307468 | orchestrator | 2026-01-10 14:27:28.307484 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-01-10 14:27:28.307502 | orchestrator | Saturday 10 January 2026 14:27:24 +0000 (0:00:00.346) 0:00:25.849 ****** 2026-01-10 14:27:28.307520 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})  2026-01-10 14:27:28.307537 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})  2026-01-10 14:27:28.307553 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:27:28.307571 | orchestrator | 2026-01-10 14:27:28.307586 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-10 14:27:28.307603 | orchestrator | Saturday 10 January 2026 14:27:25 +0000 (0:00:00.166) 0:00:26.016 ****** 2026-01-10 14:27:28.307618 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'})  2026-01-10 14:27:28.307634 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'})  2026-01-10 14:27:28.307652 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:27:28.307668 | orchestrator | 2026-01-10 14:27:28.307685 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-10 14:27:28.307695 | orchestrator | Saturday 10 January 2026 14:27:25 +0000 (0:00:00.158) 0:00:26.174 ****** 2026-01-10 14:27:28.307728 | orchestrator | ok: [testbed-node-3] => { 2026-01-10 14:27:28.307738 | orchestrator |  "lvm_report": { 2026-01-10 14:27:28.307748 | orchestrator |  "lv": [ 2026-01-10 14:27:28.307758 | orchestrator |  { 2026-01-10 14:27:28.307767 | orchestrator |  "lv_name": 
"osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3", 2026-01-10 14:27:28.307778 | orchestrator |  "vg_name": "ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3" 2026-01-10 14:27:28.307787 | orchestrator |  }, 2026-01-10 14:27:28.307797 | orchestrator |  { 2026-01-10 14:27:28.307806 | orchestrator |  "lv_name": "osd-block-ef830303-d908-5775-964e-bef8687288a6", 2026-01-10 14:27:28.307815 | orchestrator |  "vg_name": "ceph-ef830303-d908-5775-964e-bef8687288a6" 2026-01-10 14:27:28.307825 | orchestrator |  } 2026-01-10 14:27:28.307834 | orchestrator |  ], 2026-01-10 14:27:28.307843 | orchestrator |  "pv": [ 2026-01-10 14:27:28.307852 | orchestrator |  { 2026-01-10 14:27:28.307862 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-10 14:27:28.307871 | orchestrator |  "vg_name": "ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3" 2026-01-10 14:27:28.307880 | orchestrator |  }, 2026-01-10 14:27:28.307890 | orchestrator |  { 2026-01-10 14:27:28.307899 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-10 14:27:28.307909 | orchestrator |  "vg_name": "ceph-ef830303-d908-5775-964e-bef8687288a6" 2026-01-10 14:27:28.307918 | orchestrator |  } 2026-01-10 14:27:28.307927 | orchestrator |  ] 2026-01-10 14:27:28.307937 | orchestrator |  } 2026-01-10 14:27:28.307946 | orchestrator | } 2026-01-10 14:27:28.307956 | orchestrator | 2026-01-10 14:27:28.307965 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-10 14:27:28.307975 | orchestrator | 2026-01-10 14:27:28.307984 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-10 14:27:28.308005 | orchestrator | Saturday 10 January 2026 14:27:25 +0000 (0:00:00.359) 0:00:26.534 ****** 2026-01-10 14:27:28.308015 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-10 14:27:28.308024 | orchestrator | 2026-01-10 14:27:28.308033 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-10 
14:27:28.308043 | orchestrator | Saturday 10 January 2026 14:27:25 +0000 (0:00:00.260) 0:00:26.794 ****** 2026-01-10 14:27:28.308052 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:27:28.308062 | orchestrator | 2026-01-10 14:27:28.308071 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:28.308081 | orchestrator | Saturday 10 January 2026 14:27:26 +0000 (0:00:00.232) 0:00:27.027 ****** 2026-01-10 14:27:28.308090 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-10 14:27:28.308125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-10 14:27:28.308141 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-10 14:27:28.308156 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-10 14:27:28.308173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-10 14:27:28.308188 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-10 14:27:28.308214 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-10 14:27:28.308231 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-10 14:27:28.308247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-10 14:27:28.308262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-10 14:27:28.308278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-10 14:27:28.308292 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-10 14:27:28.308307 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-10 14:27:28.308322 | orchestrator | 2026-01-10 14:27:28.308336 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:28.308352 | orchestrator | Saturday 10 January 2026 14:27:26 +0000 (0:00:00.454) 0:00:27.481 ****** 2026-01-10 14:27:28.308367 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:28.308383 | orchestrator | 2026-01-10 14:27:28.308399 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:28.308415 | orchestrator | Saturday 10 January 2026 14:27:26 +0000 (0:00:00.238) 0:00:27.720 ****** 2026-01-10 14:27:28.308430 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:28.308445 | orchestrator | 2026-01-10 14:27:28.308461 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:28.308475 | orchestrator | Saturday 10 January 2026 14:27:26 +0000 (0:00:00.210) 0:00:27.930 ****** 2026-01-10 14:27:28.308489 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:28.308506 | orchestrator | 2026-01-10 14:27:28.308521 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:28.308537 | orchestrator | Saturday 10 January 2026 14:27:27 +0000 (0:00:00.654) 0:00:28.584 ****** 2026-01-10 14:27:28.308553 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:28.308597 | orchestrator | 2026-01-10 14:27:28.308614 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:28.308630 | orchestrator | Saturday 10 January 2026 14:27:27 +0000 (0:00:00.240) 0:00:28.825 ****** 2026-01-10 14:27:28.308645 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:28.308661 | orchestrator | 2026-01-10 14:27:28.308677 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-01-10 14:27:28.308712 | orchestrator | Saturday 10 January 2026 14:27:28 +0000 (0:00:00.201) 0:00:29.027 ****** 2026-01-10 14:27:28.308728 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:28.308744 | orchestrator | 2026-01-10 14:27:28.308777 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:39.999287 | orchestrator | Saturday 10 January 2026 14:27:28 +0000 (0:00:00.204) 0:00:29.232 ****** 2026-01-10 14:27:39.999392 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:39.999404 | orchestrator | 2026-01-10 14:27:39.999414 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:39.999423 | orchestrator | Saturday 10 January 2026 14:27:28 +0000 (0:00:00.221) 0:00:29.453 ****** 2026-01-10 14:27:39.999431 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:39.999439 | orchestrator | 2026-01-10 14:27:39.999447 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:39.999455 | orchestrator | Saturday 10 January 2026 14:27:28 +0000 (0:00:00.215) 0:00:29.668 ****** 2026-01-10 14:27:39.999463 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78) 2026-01-10 14:27:39.999472 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78) 2026-01-10 14:27:39.999480 | orchestrator | 2026-01-10 14:27:39.999488 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:39.999496 | orchestrator | Saturday 10 January 2026 14:27:29 +0000 (0:00:00.496) 0:00:30.165 ****** 2026-01-10 14:27:39.999504 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_763a4a26-d97a-40e2-a569-d464b2971007) 2026-01-10 14:27:39.999512 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_763a4a26-d97a-40e2-a569-d464b2971007) 2026-01-10 14:27:39.999520 | orchestrator | 2026-01-10 14:27:39.999528 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:39.999536 | orchestrator | Saturday 10 January 2026 14:27:29 +0000 (0:00:00.438) 0:00:30.604 ****** 2026-01-10 14:27:39.999544 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_45b03c06-0ab6-4b62-8b16-77c772305c6a) 2026-01-10 14:27:39.999552 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_45b03c06-0ab6-4b62-8b16-77c772305c6a) 2026-01-10 14:27:39.999560 | orchestrator | 2026-01-10 14:27:39.999568 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:39.999575 | orchestrator | Saturday 10 January 2026 14:27:30 +0000 (0:00:00.425) 0:00:31.029 ****** 2026-01-10 14:27:39.999583 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e6c5241f-60aa-42cf-822c-98275b24deb1) 2026-01-10 14:27:39.999591 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e6c5241f-60aa-42cf-822c-98275b24deb1) 2026-01-10 14:27:39.999599 | orchestrator | 2026-01-10 14:27:39.999607 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:39.999615 | orchestrator | Saturday 10 January 2026 14:27:30 +0000 (0:00:00.688) 0:00:31.718 ****** 2026-01-10 14:27:39.999623 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-10 14:27:39.999631 | orchestrator | 2026-01-10 14:27:39.999638 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:39.999646 | orchestrator | Saturday 10 January 2026 14:27:31 +0000 (0:00:00.587) 0:00:32.306 ****** 2026-01-10 14:27:39.999654 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-01-10 14:27:39.999662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-10 14:27:39.999671 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-10 14:27:39.999679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-10 14:27:39.999687 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-10 14:27:39.999733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-10 14:27:39.999742 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-10 14:27:39.999749 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-10 14:27:39.999757 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-10 14:27:39.999764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-10 14:27:39.999771 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-10 14:27:39.999778 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-10 14:27:39.999785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-10 14:27:39.999792 | orchestrator | 2026-01-10 14:27:39.999800 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:39.999807 | orchestrator | Saturday 10 January 2026 14:27:32 +0000 (0:00:00.671) 0:00:32.977 ****** 2026-01-10 14:27:39.999815 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:39.999823 | orchestrator | 2026-01-10 
14:27:39.999831 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:39.999839 | orchestrator | Saturday 10 January 2026 14:27:32 +0000 (0:00:00.217) 0:00:33.195 ****** 2026-01-10 14:27:39.999846 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:39.999854 | orchestrator | 2026-01-10 14:27:39.999862 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:39.999869 | orchestrator | Saturday 10 January 2026 14:27:32 +0000 (0:00:00.234) 0:00:33.429 ****** 2026-01-10 14:27:39.999877 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:39.999885 | orchestrator | 2026-01-10 14:27:39.999923 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:39.999931 | orchestrator | Saturday 10 January 2026 14:27:32 +0000 (0:00:00.213) 0:00:33.643 ****** 2026-01-10 14:27:39.999938 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:39.999945 | orchestrator | 2026-01-10 14:27:39.999952 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:39.999960 | orchestrator | Saturday 10 January 2026 14:27:32 +0000 (0:00:00.191) 0:00:33.834 ****** 2026-01-10 14:27:39.999967 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:39.999975 | orchestrator | 2026-01-10 14:27:39.999982 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:39.999990 | orchestrator | Saturday 10 January 2026 14:27:33 +0000 (0:00:00.206) 0:00:34.041 ****** 2026-01-10 14:27:39.999997 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:40.000005 | orchestrator | 2026-01-10 14:27:40.000013 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:40.000020 | orchestrator | Saturday 10 January 2026 14:27:33 +0000 (0:00:00.219) 
0:00:34.261 ****** 2026-01-10 14:27:40.000028 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:40.000035 | orchestrator | 2026-01-10 14:27:40.000043 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:40.000050 | orchestrator | Saturday 10 January 2026 14:27:33 +0000 (0:00:00.203) 0:00:34.465 ****** 2026-01-10 14:27:40.000058 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:40.000065 | orchestrator | 2026-01-10 14:27:40.000073 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:40.000080 | orchestrator | Saturday 10 January 2026 14:27:33 +0000 (0:00:00.211) 0:00:34.677 ****** 2026-01-10 14:27:40.000088 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-10 14:27:40.000096 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-10 14:27:40.000104 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-10 14:27:40.000111 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-10 14:27:40.000125 | orchestrator | 2026-01-10 14:27:40.000172 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:40.000180 | orchestrator | Saturday 10 January 2026 14:27:34 +0000 (0:00:01.078) 0:00:35.755 ****** 2026-01-10 14:27:40.000188 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:40.000196 | orchestrator | 2026-01-10 14:27:40.000203 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:40.000211 | orchestrator | Saturday 10 January 2026 14:27:35 +0000 (0:00:00.198) 0:00:35.953 ****** 2026-01-10 14:27:40.000218 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:40.000226 | orchestrator | 2026-01-10 14:27:40.000233 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:40.000240 | orchestrator | Saturday 10 
January 2026 14:27:35 +0000 (0:00:00.762) 0:00:36.716 ****** 2026-01-10 14:27:40.000247 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:40.000255 | orchestrator | 2026-01-10 14:27:40.000262 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:40.000269 | orchestrator | Saturday 10 January 2026 14:27:35 +0000 (0:00:00.196) 0:00:36.913 ****** 2026-01-10 14:27:40.000277 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:40.000284 | orchestrator | 2026-01-10 14:27:40.000292 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-10 14:27:40.000304 | orchestrator | Saturday 10 January 2026 14:27:36 +0000 (0:00:00.220) 0:00:37.133 ****** 2026-01-10 14:27:40.000312 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:40.000320 | orchestrator | 2026-01-10 14:27:40.000327 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-10 14:27:40.000334 | orchestrator | Saturday 10 January 2026 14:27:36 +0000 (0:00:00.156) 0:00:37.290 ****** 2026-01-10 14:27:40.000342 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0fad3856-f6d1-50e2-a5cb-d9f4a0859299'}}) 2026-01-10 14:27:40.000349 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '39355231-3192-5ff7-9e27-947e8968f1e9'}}) 2026-01-10 14:27:40.000356 | orchestrator | 2026-01-10 14:27:40.000364 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-10 14:27:40.000371 | orchestrator | Saturday 10 January 2026 14:27:36 +0000 (0:00:00.201) 0:00:37.491 ****** 2026-01-10 14:27:40.000380 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'}) 2026-01-10 14:27:40.000389 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'}) 2026-01-10 14:27:40.000396 | orchestrator | 2026-01-10 14:27:40.000404 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-10 14:27:40.000412 | orchestrator | Saturday 10 January 2026 14:27:38 +0000 (0:00:01.900) 0:00:39.392 ****** 2026-01-10 14:27:40.000419 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'})  2026-01-10 14:27:40.000428 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'})  2026-01-10 14:27:40.000435 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:40.000442 | orchestrator | 2026-01-10 14:27:40.000450 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-10 14:27:40.000457 | orchestrator | Saturday 10 January 2026 14:27:38 +0000 (0:00:00.143) 0:00:39.536 ****** 2026-01-10 14:27:40.000465 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'}) 2026-01-10 14:27:40.000479 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'}) 2026-01-10 14:27:46.115921 | orchestrator | 2026-01-10 14:27:46.116033 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-10 14:27:46.116047 | orchestrator | Saturday 10 January 2026 14:27:39 +0000 (0:00:01.385) 0:00:40.922 ****** 2026-01-10 14:27:46.116055 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 
'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'})  2026-01-10 14:27:46.116063 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'})  2026-01-10 14:27:46.116070 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:46.116077 | orchestrator | 2026-01-10 14:27:46.116084 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-10 14:27:46.116091 | orchestrator | Saturday 10 January 2026 14:27:40 +0000 (0:00:00.167) 0:00:41.089 ****** 2026-01-10 14:27:46.116098 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:46.116106 | orchestrator | 2026-01-10 14:27:46.116113 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-10 14:27:46.116119 | orchestrator | Saturday 10 January 2026 14:27:40 +0000 (0:00:00.158) 0:00:41.248 ****** 2026-01-10 14:27:46.116126 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'})  2026-01-10 14:27:46.116133 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'})  2026-01-10 14:27:46.116140 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:46.116181 | orchestrator | 2026-01-10 14:27:46.116189 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-10 14:27:46.116196 | orchestrator | Saturday 10 January 2026 14:27:40 +0000 (0:00:00.151) 0:00:41.400 ****** 2026-01-10 14:27:46.116202 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:46.116208 | orchestrator | 2026-01-10 14:27:46.116214 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-10 14:27:46.116221 | orchestrator | 
Saturday 10 January 2026 14:27:40 +0000 (0:00:00.166) 0:00:41.566 ****** 2026-01-10 14:27:46.116227 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'})  2026-01-10 14:27:46.116233 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'})  2026-01-10 14:27:46.116245 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:46.116251 | orchestrator | 2026-01-10 14:27:46.116256 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-10 14:27:46.116279 | orchestrator | Saturday 10 January 2026 14:27:41 +0000 (0:00:00.409) 0:00:41.976 ****** 2026-01-10 14:27:46.116286 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:46.116300 | orchestrator | 2026-01-10 14:27:46.116306 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-10 14:27:46.116312 | orchestrator | Saturday 10 January 2026 14:27:41 +0000 (0:00:00.154) 0:00:42.130 ****** 2026-01-10 14:27:46.116318 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'})  2026-01-10 14:27:46.116324 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'})  2026-01-10 14:27:46.116331 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:46.116337 | orchestrator | 2026-01-10 14:27:46.116343 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-10 14:27:46.116347 | orchestrator | Saturday 10 January 2026 14:27:41 +0000 (0:00:00.167) 0:00:42.297 ****** 2026-01-10 14:27:46.116351 | orchestrator | ok: [testbed-node-4] 
2026-01-10 14:27:46.116371 | orchestrator | 2026-01-10 14:27:46.116376 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-10 14:27:46.116380 | orchestrator | Saturday 10 January 2026 14:27:41 +0000 (0:00:00.152) 0:00:42.450 ****** 2026-01-10 14:27:46.116385 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'})  2026-01-10 14:27:46.116391 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'})  2026-01-10 14:27:46.116397 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:46.116403 | orchestrator | 2026-01-10 14:27:46.116408 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-01-10 14:27:46.116414 | orchestrator | Saturday 10 January 2026 14:27:41 +0000 (0:00:00.156) 0:00:42.606 ****** 2026-01-10 14:27:46.116420 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'})  2026-01-10 14:27:46.116426 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'})  2026-01-10 14:27:46.116433 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:46.116439 | orchestrator | 2026-01-10 14:27:46.116445 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-10 14:27:46.116470 | orchestrator | Saturday 10 January 2026 14:27:41 +0000 (0:00:00.199) 0:00:42.805 ****** 2026-01-10 14:27:46.116477 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'})  2026-01-10 
14:27:46.116484 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'})  2026-01-10 14:27:46.116490 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:46.116496 | orchestrator | 2026-01-10 14:27:46.116503 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-10 14:27:46.116509 | orchestrator | Saturday 10 January 2026 14:27:42 +0000 (0:00:00.214) 0:00:43.020 ****** 2026-01-10 14:27:46.116516 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:46.116522 | orchestrator | 2026-01-10 14:27:46.116528 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-10 14:27:46.116535 | orchestrator | Saturday 10 January 2026 14:27:42 +0000 (0:00:00.160) 0:00:43.181 ****** 2026-01-10 14:27:46.116541 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:46.116547 | orchestrator | 2026-01-10 14:27:46.116553 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-10 14:27:46.116559 | orchestrator | Saturday 10 January 2026 14:27:42 +0000 (0:00:00.170) 0:00:43.352 ****** 2026-01-10 14:27:46.116567 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:46.116574 | orchestrator | 2026-01-10 14:27:46.116580 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-10 14:27:46.116587 | orchestrator | Saturday 10 January 2026 14:27:42 +0000 (0:00:00.210) 0:00:43.563 ****** 2026-01-10 14:27:46.116593 | orchestrator | ok: [testbed-node-4] => { 2026-01-10 14:27:46.116600 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-10 14:27:46.116607 | orchestrator | } 2026-01-10 14:27:46.116615 | orchestrator | 2026-01-10 14:27:46.116621 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-10 
14:27:46.116627 | orchestrator | Saturday 10 January 2026 14:27:42 +0000 (0:00:00.177) 0:00:43.740 ****** 2026-01-10 14:27:46.116632 | orchestrator | ok: [testbed-node-4] => { 2026-01-10 14:27:46.116639 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-10 14:27:46.116645 | orchestrator | } 2026-01-10 14:27:46.116651 | orchestrator | 2026-01-10 14:27:46.116658 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-10 14:27:46.116665 | orchestrator | Saturday 10 January 2026 14:27:42 +0000 (0:00:00.150) 0:00:43.891 ****** 2026-01-10 14:27:46.116681 | orchestrator | ok: [testbed-node-4] => { 2026-01-10 14:27:46.116687 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-10 14:27:46.116694 | orchestrator | } 2026-01-10 14:27:46.116703 | orchestrator | 2026-01-10 14:27:46.116712 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-10 14:27:46.116718 | orchestrator | Saturday 10 January 2026 14:27:43 +0000 (0:00:00.391) 0:00:44.283 ****** 2026-01-10 14:27:46.116725 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:27:46.116731 | orchestrator | 2026-01-10 14:27:46.116737 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-10 14:27:46.116745 | orchestrator | Saturday 10 January 2026 14:27:43 +0000 (0:00:00.536) 0:00:44.820 ****** 2026-01-10 14:27:46.116750 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:27:46.116757 | orchestrator | 2026-01-10 14:27:46.116763 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-10 14:27:46.116769 | orchestrator | Saturday 10 January 2026 14:27:44 +0000 (0:00:00.520) 0:00:45.340 ****** 2026-01-10 14:27:46.116776 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:27:46.116782 | orchestrator | 2026-01-10 14:27:46.116789 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-01-10 14:27:46.116795 | orchestrator | Saturday 10 January 2026 14:27:44 +0000 (0:00:00.542) 0:00:45.883 ****** 2026-01-10 14:27:46.116802 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:27:46.116808 | orchestrator | 2026-01-10 14:27:46.116814 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-10 14:27:46.116821 | orchestrator | Saturday 10 January 2026 14:27:45 +0000 (0:00:00.147) 0:00:46.030 ****** 2026-01-10 14:27:46.116827 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:46.116833 | orchestrator | 2026-01-10 14:27:46.116848 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-10 14:27:46.116855 | orchestrator | Saturday 10 January 2026 14:27:45 +0000 (0:00:00.123) 0:00:46.154 ****** 2026-01-10 14:27:46.116861 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:46.116867 | orchestrator | 2026-01-10 14:27:46.116873 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-10 14:27:46.116879 | orchestrator | Saturday 10 January 2026 14:27:45 +0000 (0:00:00.114) 0:00:46.268 ****** 2026-01-10 14:27:46.116884 | orchestrator | ok: [testbed-node-4] => { 2026-01-10 14:27:46.116890 | orchestrator |  "vgs_report": { 2026-01-10 14:27:46.116896 | orchestrator |  "vg": [] 2026-01-10 14:27:46.116902 | orchestrator |  } 2026-01-10 14:27:46.116908 | orchestrator | } 2026-01-10 14:27:46.116914 | orchestrator | 2026-01-10 14:27:46.116920 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-10 14:27:46.116926 | orchestrator | Saturday 10 January 2026 14:27:45 +0000 (0:00:00.152) 0:00:46.421 ****** 2026-01-10 14:27:46.116932 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:46.116938 | orchestrator | 2026-01-10 14:27:46.116944 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-01-10 14:27:46.116951 | orchestrator | Saturday 10 January 2026 14:27:45 +0000 (0:00:00.148) 0:00:46.569 ****** 2026-01-10 14:27:46.116957 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:46.116963 | orchestrator | 2026-01-10 14:27:46.116969 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-10 14:27:46.116976 | orchestrator | Saturday 10 January 2026 14:27:45 +0000 (0:00:00.154) 0:00:46.724 ****** 2026-01-10 14:27:46.116982 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:46.116992 | orchestrator | 2026-01-10 14:27:46.116998 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-10 14:27:46.117005 | orchestrator | Saturday 10 January 2026 14:27:45 +0000 (0:00:00.146) 0:00:46.870 ****** 2026-01-10 14:27:46.117011 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:46.117016 | orchestrator | 2026-01-10 14:27:46.117035 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-10 14:27:51.183000 | orchestrator | Saturday 10 January 2026 14:27:46 +0000 (0:00:00.167) 0:00:47.038 ****** 2026-01-10 14:27:51.183223 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:51.183246 | orchestrator | 2026-01-10 14:27:51.183259 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-10 14:27:51.183271 | orchestrator | Saturday 10 January 2026 14:27:46 +0000 (0:00:00.382) 0:00:47.420 ****** 2026-01-10 14:27:51.183282 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:51.183293 | orchestrator | 2026-01-10 14:27:51.183304 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-10 14:27:51.183315 | orchestrator | Saturday 10 January 2026 14:27:46 +0000 (0:00:00.138) 0:00:47.559 ****** 2026-01-10 14:27:51.183326 | orchestrator | skipping: [testbed-node-4] 
2026-01-10 14:27:51.183336 | orchestrator | 2026-01-10 14:27:51.183347 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-10 14:27:51.183358 | orchestrator | Saturday 10 January 2026 14:27:46 +0000 (0:00:00.147) 0:00:47.706 ****** 2026-01-10 14:27:51.183369 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:51.183379 | orchestrator | 2026-01-10 14:27:51.183390 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-10 14:27:51.183401 | orchestrator | Saturday 10 January 2026 14:27:46 +0000 (0:00:00.158) 0:00:47.865 ****** 2026-01-10 14:27:51.183411 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:51.183422 | orchestrator | 2026-01-10 14:27:51.183433 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-10 14:27:51.183443 | orchestrator | Saturday 10 January 2026 14:27:47 +0000 (0:00:00.140) 0:00:48.006 ****** 2026-01-10 14:27:51.183454 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:51.183465 | orchestrator | 2026-01-10 14:27:51.183476 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-10 14:27:51.183487 | orchestrator | Saturday 10 January 2026 14:27:47 +0000 (0:00:00.125) 0:00:48.131 ****** 2026-01-10 14:27:51.183498 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:51.183508 | orchestrator | 2026-01-10 14:27:51.183519 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-10 14:27:51.183532 | orchestrator | Saturday 10 January 2026 14:27:47 +0000 (0:00:00.121) 0:00:48.253 ****** 2026-01-10 14:27:51.183544 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:51.183557 | orchestrator | 2026-01-10 14:27:51.183569 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-10 14:27:51.183582 | orchestrator | 
Saturday 10 January 2026 14:27:47 +0000 (0:00:00.133) 0:00:48.386 ****** 2026-01-10 14:27:51.183594 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:51.183606 | orchestrator | 2026-01-10 14:27:51.183618 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-10 14:27:51.183631 | orchestrator | Saturday 10 January 2026 14:27:47 +0000 (0:00:00.160) 0:00:48.546 ****** 2026-01-10 14:27:51.183643 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:51.183655 | orchestrator | 2026-01-10 14:27:51.183668 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-10 14:27:51.183698 | orchestrator | Saturday 10 January 2026 14:27:47 +0000 (0:00:00.153) 0:00:48.700 ****** 2026-01-10 14:27:51.183711 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'})  2026-01-10 14:27:51.183724 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'})  2026-01-10 14:27:51.183737 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:51.183749 | orchestrator | 2026-01-10 14:27:51.183761 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-10 14:27:51.183773 | orchestrator | Saturday 10 January 2026 14:27:47 +0000 (0:00:00.146) 0:00:48.846 ****** 2026-01-10 14:27:51.183786 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'})  2026-01-10 14:27:51.183807 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'})  2026-01-10 14:27:51.183820 | orchestrator | skipping: 
[testbed-node-4] 2026-01-10 14:27:51.183832 | orchestrator | 2026-01-10 14:27:51.183842 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-10 14:27:51.183853 | orchestrator | Saturday 10 January 2026 14:27:48 +0000 (0:00:00.169) 0:00:49.016 ****** 2026-01-10 14:27:51.183864 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'})  2026-01-10 14:27:51.183874 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'})  2026-01-10 14:27:51.183885 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:51.183896 | orchestrator | 2026-01-10 14:27:51.183906 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-01-10 14:27:51.183917 | orchestrator | Saturday 10 January 2026 14:27:48 +0000 (0:00:00.168) 0:00:49.184 ****** 2026-01-10 14:27:51.183928 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'})  2026-01-10 14:27:51.183939 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'})  2026-01-10 14:27:51.183950 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:51.183961 | orchestrator | 2026-01-10 14:27:51.183990 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-10 14:27:51.184001 | orchestrator | Saturday 10 January 2026 14:27:48 +0000 (0:00:00.365) 0:00:49.550 ****** 2026-01-10 14:27:51.184012 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 
'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'})  2026-01-10 14:27:51.184023 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'})  2026-01-10 14:27:51.184034 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:51.184045 | orchestrator | 2026-01-10 14:27:51.184055 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-10 14:27:51.184066 | orchestrator | Saturday 10 January 2026 14:27:48 +0000 (0:00:00.201) 0:00:49.751 ****** 2026-01-10 14:27:51.184077 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'})  2026-01-10 14:27:51.184089 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'})  2026-01-10 14:27:51.184108 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:51.184126 | orchestrator | 2026-01-10 14:27:51.184146 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-10 14:27:51.184246 | orchestrator | Saturday 10 January 2026 14:27:48 +0000 (0:00:00.152) 0:00:49.903 ****** 2026-01-10 14:27:51.184261 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'})  2026-01-10 14:27:51.184272 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'})  2026-01-10 14:27:51.184283 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:51.184294 | orchestrator | 2026-01-10 14:27:51.184304 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-10 
14:27:51.184315 | orchestrator | Saturday 10 January 2026 14:27:49 +0000 (0:00:00.167) 0:00:50.071 ****** 2026-01-10 14:27:51.184336 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'})  2026-01-10 14:27:51.184353 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'})  2026-01-10 14:27:51.184364 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:51.184375 | orchestrator | 2026-01-10 14:27:51.184385 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-10 14:27:51.184396 | orchestrator | Saturday 10 January 2026 14:27:49 +0000 (0:00:00.160) 0:00:50.231 ****** 2026-01-10 14:27:51.184407 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:27:51.184418 | orchestrator | 2026-01-10 14:27:51.184428 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-10 14:27:51.184438 | orchestrator | Saturday 10 January 2026 14:27:49 +0000 (0:00:00.588) 0:00:50.820 ****** 2026-01-10 14:27:51.184449 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:27:51.184460 | orchestrator | 2026-01-10 14:27:51.184470 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-10 14:27:51.184481 | orchestrator | Saturday 10 January 2026 14:27:50 +0000 (0:00:00.592) 0:00:51.412 ****** 2026-01-10 14:27:51.184492 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:27:51.184502 | orchestrator | 2026-01-10 14:27:51.184513 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-10 14:27:51.184523 | orchestrator | Saturday 10 January 2026 14:27:50 +0000 (0:00:00.150) 0:00:51.562 ****** 2026-01-10 14:27:51.184534 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'vg_name': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'}) 2026-01-10 14:27:51.184546 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'vg_name': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'}) 2026-01-10 14:27:51.184557 | orchestrator | 2026-01-10 14:27:51.184567 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-10 14:27:51.184578 | orchestrator | Saturday 10 January 2026 14:27:50 +0000 (0:00:00.194) 0:00:51.757 ****** 2026-01-10 14:27:51.184588 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'})  2026-01-10 14:27:51.184599 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'})  2026-01-10 14:27:51.184610 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:51.184620 | orchestrator | 2026-01-10 14:27:51.184631 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-10 14:27:51.184642 | orchestrator | Saturday 10 January 2026 14:27:51 +0000 (0:00:00.176) 0:00:51.933 ****** 2026-01-10 14:27:51.184652 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'})  2026-01-10 14:27:51.184672 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'})  2026-01-10 14:27:57.641952 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:57.642096 | orchestrator | 2026-01-10 14:27:57.642107 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-10 14:27:57.642113 | 
orchestrator | Saturday 10 January 2026 14:27:51 +0000 (0:00:00.175) 0:00:52.109 ****** 2026-01-10 14:27:57.642118 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'})  2026-01-10 14:27:57.642124 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'})  2026-01-10 14:27:57.642129 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:27:57.642147 | orchestrator | 2026-01-10 14:27:57.642151 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-10 14:27:57.642156 | orchestrator | Saturday 10 January 2026 14:27:51 +0000 (0:00:00.166) 0:00:52.275 ****** 2026-01-10 14:27:57.642160 | orchestrator | ok: [testbed-node-4] => { 2026-01-10 14:27:57.642163 | orchestrator |  "lvm_report": { 2026-01-10 14:27:57.642169 | orchestrator |  "lv": [ 2026-01-10 14:27:57.642173 | orchestrator |  { 2026-01-10 14:27:57.642178 | orchestrator |  "lv_name": "osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299", 2026-01-10 14:27:57.642203 | orchestrator |  "vg_name": "ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299" 2026-01-10 14:27:57.642207 | orchestrator |  }, 2026-01-10 14:27:57.642212 | orchestrator |  { 2026-01-10 14:27:57.642219 | orchestrator |  "lv_name": "osd-block-39355231-3192-5ff7-9e27-947e8968f1e9", 2026-01-10 14:27:57.642226 | orchestrator |  "vg_name": "ceph-39355231-3192-5ff7-9e27-947e8968f1e9" 2026-01-10 14:27:57.642233 | orchestrator |  } 2026-01-10 14:27:57.642241 | orchestrator |  ], 2026-01-10 14:27:57.642249 | orchestrator |  "pv": [ 2026-01-10 14:27:57.642256 | orchestrator |  { 2026-01-10 14:27:57.642263 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-10 14:27:57.642270 | orchestrator |  "vg_name": "ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299" 2026-01-10 14:27:57.642276 | orchestrator |  }, 2026-01-10 
14:27:57.642280 | orchestrator |  { 2026-01-10 14:27:57.642284 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-10 14:27:57.642288 | orchestrator |  "vg_name": "ceph-39355231-3192-5ff7-9e27-947e8968f1e9" 2026-01-10 14:27:57.642292 | orchestrator |  } 2026-01-10 14:27:57.642295 | orchestrator |  ] 2026-01-10 14:27:57.642300 | orchestrator |  } 2026-01-10 14:27:57.642304 | orchestrator | } 2026-01-10 14:27:57.642308 | orchestrator | 2026-01-10 14:27:57.642311 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-10 14:27:57.642315 | orchestrator | 2026-01-10 14:27:57.642319 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-10 14:27:57.642323 | orchestrator | Saturday 10 January 2026 14:27:51 +0000 (0:00:00.517) 0:00:52.792 ****** 2026-01-10 14:27:57.642327 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-10 14:27:57.642331 | orchestrator | 2026-01-10 14:27:57.642336 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-10 14:27:57.642340 | orchestrator | Saturday 10 January 2026 14:27:52 +0000 (0:00:00.297) 0:00:53.090 ****** 2026-01-10 14:27:57.642344 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:27:57.642349 | orchestrator | 2026-01-10 14:27:57.642355 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:57.642361 | orchestrator | Saturday 10 January 2026 14:27:52 +0000 (0:00:00.254) 0:00:53.344 ****** 2026-01-10 14:27:57.642538 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-01-10 14:27:57.642549 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-01-10 14:27:57.642557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-01-10 14:27:57.642569 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-01-10 14:27:57.642581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-01-10 14:27:57.642590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-01-10 14:27:57.642597 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-01-10 14:27:57.642604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-01-10 14:27:57.642612 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-01-10 14:27:57.642630 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-01-10 14:27:57.642637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-01-10 14:27:57.642644 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-01-10 14:27:57.642651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-01-10 14:27:57.642658 | orchestrator | 2026-01-10 14:27:57.642669 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:57.642675 | orchestrator | Saturday 10 January 2026 14:27:52 +0000 (0:00:00.446) 0:00:53.791 ****** 2026-01-10 14:27:57.642682 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:57.642686 | orchestrator | 2026-01-10 14:27:57.642691 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:57.642695 | orchestrator | Saturday 10 January 2026 14:27:53 +0000 (0:00:00.207) 0:00:53.999 ****** 2026-01-10 14:27:57.642699 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:57.642704 | orchestrator | 2026-01-10 
14:27:57.642708 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:57.642731 | orchestrator | Saturday 10 January 2026 14:27:53 +0000 (0:00:00.206) 0:00:54.206 ****** 2026-01-10 14:27:57.642738 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:57.642743 | orchestrator | 2026-01-10 14:27:57.642753 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:57.642760 | orchestrator | Saturday 10 January 2026 14:27:53 +0000 (0:00:00.236) 0:00:54.442 ****** 2026-01-10 14:27:57.642766 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:57.642772 | orchestrator | 2026-01-10 14:27:57.642778 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:57.642828 | orchestrator | Saturday 10 January 2026 14:27:53 +0000 (0:00:00.235) 0:00:54.678 ****** 2026-01-10 14:27:57.642838 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:57.642843 | orchestrator | 2026-01-10 14:27:57.642847 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:57.642852 | orchestrator | Saturday 10 January 2026 14:27:53 +0000 (0:00:00.241) 0:00:54.919 ****** 2026-01-10 14:27:57.642856 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:57.642861 | orchestrator | 2026-01-10 14:27:57.642865 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:57.642869 | orchestrator | Saturday 10 January 2026 14:27:54 +0000 (0:00:00.662) 0:00:55.581 ****** 2026-01-10 14:27:57.642874 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:57.642878 | orchestrator | 2026-01-10 14:27:57.642882 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:57.642886 | orchestrator | Saturday 10 January 2026 14:27:54 +0000 (0:00:00.203) 
0:00:55.785 ****** 2026-01-10 14:27:57.642889 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:27:57.642893 | orchestrator | 2026-01-10 14:27:57.642897 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:57.642900 | orchestrator | Saturday 10 January 2026 14:27:55 +0000 (0:00:00.210) 0:00:55.995 ****** 2026-01-10 14:27:57.642904 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c) 2026-01-10 14:27:57.642909 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c) 2026-01-10 14:27:57.642913 | orchestrator | 2026-01-10 14:27:57.642917 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:57.642920 | orchestrator | Saturday 10 January 2026 14:27:55 +0000 (0:00:00.442) 0:00:56.438 ****** 2026-01-10 14:27:57.642924 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4515c98e-1f25-421e-81d3-264e20827141) 2026-01-10 14:27:57.642928 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4515c98e-1f25-421e-81d3-264e20827141) 2026-01-10 14:27:57.642932 | orchestrator | 2026-01-10 14:27:57.642941 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:57.642948 | orchestrator | Saturday 10 January 2026 14:27:55 +0000 (0:00:00.459) 0:00:56.897 ****** 2026-01-10 14:27:57.642952 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9cc4e4a6-fdb0-4f2b-8497-7d80ad86af00) 2026-01-10 14:27:57.642956 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9cc4e4a6-fdb0-4f2b-8497-7d80ad86af00) 2026-01-10 14:27:57.642959 | orchestrator | 2026-01-10 14:27:57.642963 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:57.642967 | orchestrator | Saturday 10 
January 2026 14:27:56 +0000 (0:00:00.420) 0:00:57.318 ****** 2026-01-10 14:27:57.642970 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_355a7212-75f2-41c4-a284-fbc15ac49d3c) 2026-01-10 14:27:57.642974 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_355a7212-75f2-41c4-a284-fbc15ac49d3c) 2026-01-10 14:27:57.642978 | orchestrator | 2026-01-10 14:27:57.642981 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-10 14:27:57.642985 | orchestrator | Saturday 10 January 2026 14:27:56 +0000 (0:00:00.453) 0:00:57.771 ****** 2026-01-10 14:27:57.642989 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-10 14:27:57.642992 | orchestrator | 2026-01-10 14:27:57.642996 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:27:57.643000 | orchestrator | Saturday 10 January 2026 14:27:57 +0000 (0:00:00.367) 0:00:58.138 ****** 2026-01-10 14:27:57.643003 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-10 14:27:57.643007 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-10 14:27:57.643011 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-10 14:27:57.643014 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-10 14:27:57.643018 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-10 14:27:57.643022 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-10 14:27:57.643025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-10 14:27:57.643029 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-10 14:27:57.643033 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-10 14:27:57.643036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-10 14:27:57.643040 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-10 14:27:57.643050 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-10 14:28:06.923637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-10 14:28:06.923754 | orchestrator | 2026-01-10 14:28:06.923773 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:28:06.923839 | orchestrator | Saturday 10 January 2026 14:27:57 +0000 (0:00:00.422) 0:00:58.560 ****** 2026-01-10 14:28:06.923854 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:06.923869 | orchestrator | 2026-01-10 14:28:06.923883 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:28:06.923897 | orchestrator | Saturday 10 January 2026 14:27:57 +0000 (0:00:00.254) 0:00:58.815 ****** 2026-01-10 14:28:06.923911 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:06.923924 | orchestrator | 2026-01-10 14:28:06.923938 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:28:06.923951 | orchestrator | Saturday 10 January 2026 14:27:58 +0000 (0:00:00.701) 0:00:59.517 ****** 2026-01-10 14:28:06.923992 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:06.924006 | orchestrator | 2026-01-10 14:28:06.924018 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:28:06.924030 | 
orchestrator | Saturday 10 January 2026 14:27:58 +0000 (0:00:00.204) 0:00:59.721 ****** 2026-01-10 14:28:06.924043 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:06.924056 | orchestrator | 2026-01-10 14:28:06.924070 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:28:06.924083 | orchestrator | Saturday 10 January 2026 14:27:58 +0000 (0:00:00.196) 0:00:59.918 ****** 2026-01-10 14:28:06.924096 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:06.924110 | orchestrator | 2026-01-10 14:28:06.924123 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:28:06.924136 | orchestrator | Saturday 10 January 2026 14:27:59 +0000 (0:00:00.259) 0:01:00.177 ****** 2026-01-10 14:28:06.924149 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:06.924163 | orchestrator | 2026-01-10 14:28:06.924177 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:28:06.924191 | orchestrator | Saturday 10 January 2026 14:27:59 +0000 (0:00:00.193) 0:01:00.371 ****** 2026-01-10 14:28:06.924267 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:06.924284 | orchestrator | 2026-01-10 14:28:06.924297 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:28:06.924311 | orchestrator | Saturday 10 January 2026 14:27:59 +0000 (0:00:00.229) 0:01:00.600 ****** 2026-01-10 14:28:06.924324 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:06.924336 | orchestrator | 2026-01-10 14:28:06.924344 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:28:06.924353 | orchestrator | Saturday 10 January 2026 14:27:59 +0000 (0:00:00.212) 0:01:00.813 ****** 2026-01-10 14:28:06.924383 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-10 14:28:06.924398 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-01-10 14:28:06.924412 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-10 14:28:06.924424 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-10 14:28:06.924437 | orchestrator | 2026-01-10 14:28:06.924448 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:28:06.924460 | orchestrator | Saturday 10 January 2026 14:28:00 +0000 (0:00:00.733) 0:01:01.547 ****** 2026-01-10 14:28:06.924473 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:06.924486 | orchestrator | 2026-01-10 14:28:06.924500 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:28:06.924513 | orchestrator | Saturday 10 January 2026 14:28:00 +0000 (0:00:00.202) 0:01:01.749 ****** 2026-01-10 14:28:06.924526 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:06.924539 | orchestrator | 2026-01-10 14:28:06.924574 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:28:06.924587 | orchestrator | Saturday 10 January 2026 14:28:01 +0000 (0:00:00.190) 0:01:01.940 ****** 2026-01-10 14:28:06.924601 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:06.924613 | orchestrator | 2026-01-10 14:28:06.924626 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-10 14:28:06.924639 | orchestrator | Saturday 10 January 2026 14:28:01 +0000 (0:00:00.233) 0:01:02.174 ****** 2026-01-10 14:28:06.924653 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:06.924666 | orchestrator | 2026-01-10 14:28:06.924679 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-10 14:28:06.924692 | orchestrator | Saturday 10 January 2026 14:28:01 +0000 (0:00:00.217) 0:01:02.391 ****** 2026-01-10 14:28:06.924705 | orchestrator | skipping: [testbed-node-5] 2026-01-10 
14:28:06.924718 | orchestrator | 2026-01-10 14:28:06.924731 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-10 14:28:06.924745 | orchestrator | Saturday 10 January 2026 14:28:01 +0000 (0:00:00.369) 0:01:02.761 ****** 2026-01-10 14:28:06.924757 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4cb3fc90-004d-5443-9ae7-f5eff9c4438f'}}) 2026-01-10 14:28:06.924784 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dec76364-a7ee-5469-8bc3-2dcf5060f83e'}}) 2026-01-10 14:28:06.924798 | orchestrator | 2026-01-10 14:28:06.924811 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-10 14:28:06.924824 | orchestrator | Saturday 10 January 2026 14:28:02 +0000 (0:00:00.212) 0:01:02.973 ****** 2026-01-10 14:28:06.924839 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'}) 2026-01-10 14:28:06.924854 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'}) 2026-01-10 14:28:06.924867 | orchestrator | 2026-01-10 14:28:06.924881 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-10 14:28:06.924918 | orchestrator | Saturday 10 January 2026 14:28:03 +0000 (0:00:01.822) 0:01:04.795 ****** 2026-01-10 14:28:06.924932 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'})  2026-01-10 14:28:06.924947 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'})  2026-01-10 14:28:06.924960 | orchestrator | skipping: 
[testbed-node-5] 2026-01-10 14:28:06.924974 | orchestrator | 2026-01-10 14:28:06.924987 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-10 14:28:06.925001 | orchestrator | Saturday 10 January 2026 14:28:04 +0000 (0:00:00.192) 0:01:04.988 ****** 2026-01-10 14:28:06.925014 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'}) 2026-01-10 14:28:06.925027 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'}) 2026-01-10 14:28:06.925039 | orchestrator | 2026-01-10 14:28:06.925053 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-10 14:28:06.925066 | orchestrator | Saturday 10 January 2026 14:28:05 +0000 (0:00:01.388) 0:01:06.377 ****** 2026-01-10 14:28:06.925079 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'})  2026-01-10 14:28:06.925093 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'})  2026-01-10 14:28:06.925106 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:06.925119 | orchestrator | 2026-01-10 14:28:06.925132 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-10 14:28:06.925145 | orchestrator | Saturday 10 January 2026 14:28:05 +0000 (0:00:00.176) 0:01:06.554 ****** 2026-01-10 14:28:06.925159 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:06.925172 | orchestrator | 2026-01-10 14:28:06.925185 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-10 14:28:06.925198 | 
orchestrator | Saturday 10 January 2026 14:28:05 +0000 (0:00:00.152) 0:01:06.706 ****** 2026-01-10 14:28:06.925241 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'})  2026-01-10 14:28:06.925256 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'})  2026-01-10 14:28:06.925269 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:06.925282 | orchestrator | 2026-01-10 14:28:06.925295 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-10 14:28:06.925318 | orchestrator | Saturday 10 January 2026 14:28:05 +0000 (0:00:00.157) 0:01:06.864 ****** 2026-01-10 14:28:06.925331 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:06.925344 | orchestrator | 2026-01-10 14:28:06.925357 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-10 14:28:06.925370 | orchestrator | Saturday 10 January 2026 14:28:06 +0000 (0:00:00.128) 0:01:06.993 ****** 2026-01-10 14:28:06.925383 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'})  2026-01-10 14:28:06.925397 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'})  2026-01-10 14:28:06.925410 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:06.925424 | orchestrator | 2026-01-10 14:28:06.925437 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-10 14:28:06.925450 | orchestrator | Saturday 10 January 2026 14:28:06 +0000 (0:00:00.159) 0:01:07.152 ****** 2026-01-10 14:28:06.925463 | orchestrator | 
skipping: [testbed-node-5] 2026-01-10 14:28:06.925476 | orchestrator | 2026-01-10 14:28:06.925489 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-10 14:28:06.925502 | orchestrator | Saturday 10 January 2026 14:28:06 +0000 (0:00:00.126) 0:01:07.279 ****** 2026-01-10 14:28:06.925516 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'})  2026-01-10 14:28:06.925529 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'})  2026-01-10 14:28:06.925542 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:06.925555 | orchestrator | 2026-01-10 14:28:06.925568 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-10 14:28:06.925582 | orchestrator | Saturday 10 January 2026 14:28:06 +0000 (0:00:00.144) 0:01:07.424 ****** 2026-01-10 14:28:06.925595 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:28:06.925608 | orchestrator | 2026-01-10 14:28:06.925622 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-10 14:28:06.925635 | orchestrator | Saturday 10 January 2026 14:28:06 +0000 (0:00:00.279) 0:01:07.703 ****** 2026-01-10 14:28:06.925656 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'})  2026-01-10 14:28:12.637634 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'})  2026-01-10 14:28:12.637712 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.637718 | orchestrator | 2026-01-10 14:28:12.637723 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-01-10 14:28:12.637729 | orchestrator | Saturday 10 January 2026 14:28:06 +0000 (0:00:00.146) 0:01:07.850 ****** 2026-01-10 14:28:12.637734 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'})  2026-01-10 14:28:12.637738 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'})  2026-01-10 14:28:12.637743 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.637746 | orchestrator | 2026-01-10 14:28:12.637750 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-10 14:28:12.637754 | orchestrator | Saturday 10 January 2026 14:28:07 +0000 (0:00:00.146) 0:01:07.997 ****** 2026-01-10 14:28:12.637758 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'})  2026-01-10 14:28:12.637762 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'})  2026-01-10 14:28:12.637781 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.637785 | orchestrator | 2026-01-10 14:28:12.637789 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-10 14:28:12.637793 | orchestrator | Saturday 10 January 2026 14:28:07 +0000 (0:00:00.139) 0:01:08.136 ****** 2026-01-10 14:28:12.637796 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.637800 | orchestrator | 2026-01-10 14:28:12.637804 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-10 14:28:12.637808 | orchestrator | Saturday 10 January 2026 14:28:07 +0000 
(0:00:00.127) 0:01:08.264 ****** 2026-01-10 14:28:12.637811 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.637815 | orchestrator | 2026-01-10 14:28:12.637819 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-10 14:28:12.637822 | orchestrator | Saturday 10 January 2026 14:28:07 +0000 (0:00:00.128) 0:01:08.392 ****** 2026-01-10 14:28:12.637826 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.637830 | orchestrator | 2026-01-10 14:28:12.637834 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-10 14:28:12.637838 | orchestrator | Saturday 10 January 2026 14:28:07 +0000 (0:00:00.127) 0:01:08.519 ****** 2026-01-10 14:28:12.637841 | orchestrator | ok: [testbed-node-5] => { 2026-01-10 14:28:12.637846 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-10 14:28:12.637850 | orchestrator | } 2026-01-10 14:28:12.637854 | orchestrator | 2026-01-10 14:28:12.637858 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-10 14:28:12.637862 | orchestrator | Saturday 10 January 2026 14:28:07 +0000 (0:00:00.130) 0:01:08.650 ****** 2026-01-10 14:28:12.637865 | orchestrator | ok: [testbed-node-5] => { 2026-01-10 14:28:12.637869 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-10 14:28:12.637873 | orchestrator | } 2026-01-10 14:28:12.637877 | orchestrator | 2026-01-10 14:28:12.637881 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-10 14:28:12.637884 | orchestrator | Saturday 10 January 2026 14:28:07 +0000 (0:00:00.135) 0:01:08.785 ****** 2026-01-10 14:28:12.637888 | orchestrator | ok: [testbed-node-5] => { 2026-01-10 14:28:12.637892 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-10 14:28:12.637896 | orchestrator | } 2026-01-10 14:28:12.637900 | orchestrator | 2026-01-10 14:28:12.637903 | orchestrator | TASK 
[Gather DB VGs with total and available size in bytes] ******************** 2026-01-10 14:28:12.637907 | orchestrator | Saturday 10 January 2026 14:28:07 +0000 (0:00:00.136) 0:01:08.921 ****** 2026-01-10 14:28:12.637911 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:28:12.637915 | orchestrator | 2026-01-10 14:28:12.637918 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-10 14:28:12.637922 | orchestrator | Saturday 10 January 2026 14:28:08 +0000 (0:00:00.532) 0:01:09.454 ****** 2026-01-10 14:28:12.637926 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:28:12.637930 | orchestrator | 2026-01-10 14:28:12.637933 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-10 14:28:12.637937 | orchestrator | Saturday 10 January 2026 14:28:09 +0000 (0:00:00.503) 0:01:09.957 ****** 2026-01-10 14:28:12.637941 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:28:12.637944 | orchestrator | 2026-01-10 14:28:12.637948 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-10 14:28:12.637952 | orchestrator | Saturday 10 January 2026 14:28:09 +0000 (0:00:00.680) 0:01:10.638 ****** 2026-01-10 14:28:12.637956 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:28:12.637959 | orchestrator | 2026-01-10 14:28:12.637963 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-10 14:28:12.637967 | orchestrator | Saturday 10 January 2026 14:28:09 +0000 (0:00:00.151) 0:01:10.790 ****** 2026-01-10 14:28:12.637970 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.637974 | orchestrator | 2026-01-10 14:28:12.637978 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-10 14:28:12.637985 | orchestrator | Saturday 10 January 2026 14:28:09 +0000 (0:00:00.096) 0:01:10.886 ****** 2026-01-10 14:28:12.637989 | orchestrator | 
skipping: [testbed-node-5] 2026-01-10 14:28:12.637993 | orchestrator | 2026-01-10 14:28:12.637996 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-10 14:28:12.638043 | orchestrator | Saturday 10 January 2026 14:28:10 +0000 (0:00:00.134) 0:01:11.021 ****** 2026-01-10 14:28:12.638048 | orchestrator | ok: [testbed-node-5] => { 2026-01-10 14:28:12.638052 | orchestrator |  "vgs_report": { 2026-01-10 14:28:12.638056 | orchestrator |  "vg": [] 2026-01-10 14:28:12.638077 | orchestrator |  } 2026-01-10 14:28:12.638084 | orchestrator | } 2026-01-10 14:28:12.638090 | orchestrator | 2026-01-10 14:28:12.638095 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-10 14:28:12.638101 | orchestrator | Saturday 10 January 2026 14:28:10 +0000 (0:00:00.139) 0:01:11.161 ****** 2026-01-10 14:28:12.638107 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.638112 | orchestrator | 2026-01-10 14:28:12.638118 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-10 14:28:12.638125 | orchestrator | Saturday 10 January 2026 14:28:10 +0000 (0:00:00.126) 0:01:11.287 ****** 2026-01-10 14:28:12.638132 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.638138 | orchestrator | 2026-01-10 14:28:12.638145 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-10 14:28:12.638151 | orchestrator | Saturday 10 January 2026 14:28:10 +0000 (0:00:00.133) 0:01:11.421 ****** 2026-01-10 14:28:12.638158 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.638164 | orchestrator | 2026-01-10 14:28:12.638171 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-10 14:28:12.638177 | orchestrator | Saturday 10 January 2026 14:28:10 +0000 (0:00:00.108) 0:01:11.530 ****** 2026-01-10 14:28:12.638184 | orchestrator | 
skipping: [testbed-node-5] 2026-01-10 14:28:12.638191 | orchestrator | 2026-01-10 14:28:12.638198 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-10 14:28:12.638205 | orchestrator | Saturday 10 January 2026 14:28:10 +0000 (0:00:00.154) 0:01:11.685 ****** 2026-01-10 14:28:12.638213 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.638217 | orchestrator | 2026-01-10 14:28:12.638256 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-10 14:28:12.638261 | orchestrator | Saturday 10 January 2026 14:28:10 +0000 (0:00:00.121) 0:01:11.806 ****** 2026-01-10 14:28:12.638265 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.638269 | orchestrator | 2026-01-10 14:28:12.638273 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-10 14:28:12.638277 | orchestrator | Saturday 10 January 2026 14:28:10 +0000 (0:00:00.112) 0:01:11.918 ****** 2026-01-10 14:28:12.638281 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.638285 | orchestrator | 2026-01-10 14:28:12.638289 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-10 14:28:12.638294 | orchestrator | Saturday 10 January 2026 14:28:11 +0000 (0:00:00.133) 0:01:12.051 ****** 2026-01-10 14:28:12.638298 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.638302 | orchestrator | 2026-01-10 14:28:12.638306 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-10 14:28:12.638310 | orchestrator | Saturday 10 January 2026 14:28:11 +0000 (0:00:00.287) 0:01:12.338 ****** 2026-01-10 14:28:12.638314 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.638318 | orchestrator | 2026-01-10 14:28:12.638327 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-10 
14:28:12.638331 | orchestrator | Saturday 10 January 2026 14:28:11 +0000 (0:00:00.130) 0:01:12.469 ****** 2026-01-10 14:28:12.638335 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.638339 | orchestrator | 2026-01-10 14:28:12.638344 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-10 14:28:12.638352 | orchestrator | Saturday 10 January 2026 14:28:11 +0000 (0:00:00.123) 0:01:12.593 ****** 2026-01-10 14:28:12.638356 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.638360 | orchestrator | 2026-01-10 14:28:12.638365 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-10 14:28:12.638369 | orchestrator | Saturday 10 January 2026 14:28:11 +0000 (0:00:00.117) 0:01:12.710 ****** 2026-01-10 14:28:12.638373 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.638377 | orchestrator | 2026-01-10 14:28:12.638382 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-10 14:28:12.638386 | orchestrator | Saturday 10 January 2026 14:28:11 +0000 (0:00:00.128) 0:01:12.839 ****** 2026-01-10 14:28:12.638390 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.638394 | orchestrator | 2026-01-10 14:28:12.638398 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-10 14:28:12.638402 | orchestrator | Saturday 10 January 2026 14:28:12 +0000 (0:00:00.137) 0:01:12.977 ****** 2026-01-10 14:28:12.638406 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.638411 | orchestrator | 2026-01-10 14:28:12.638415 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-10 14:28:12.638419 | orchestrator | Saturday 10 January 2026 14:28:12 +0000 (0:00:00.122) 0:01:13.100 ****** 2026-01-10 14:28:12.638423 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'})  2026-01-10 14:28:12.638428 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'})  2026-01-10 14:28:12.638432 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.638436 | orchestrator | 2026-01-10 14:28:12.638441 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-10 14:28:12.638445 | orchestrator | Saturday 10 January 2026 14:28:12 +0000 (0:00:00.132) 0:01:13.233 ****** 2026-01-10 14:28:12.638449 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'})  2026-01-10 14:28:12.638453 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'})  2026-01-10 14:28:12.638457 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:12.638462 | orchestrator | 2026-01-10 14:28:12.638466 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-10 14:28:12.638470 | orchestrator | Saturday 10 January 2026 14:28:12 +0000 (0:00:00.162) 0:01:13.395 ****** 2026-01-10 14:28:12.638478 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'})  2026-01-10 14:28:15.694426 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'})  2026-01-10 14:28:15.694554 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:15.694571 | orchestrator | 2026-01-10 14:28:15.694583 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-01-10 14:28:15.694595 | orchestrator | Saturday 10 January 2026 14:28:12 +0000 (0:00:00.167) 0:01:13.563 ****** 2026-01-10 14:28:15.694605 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'})  2026-01-10 14:28:15.694615 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'})  2026-01-10 14:28:15.694624 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:15.694634 | orchestrator | 2026-01-10 14:28:15.694644 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-10 14:28:15.694678 | orchestrator | Saturday 10 January 2026 14:28:12 +0000 (0:00:00.138) 0:01:13.702 ****** 2026-01-10 14:28:15.694688 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'})  2026-01-10 14:28:15.694697 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'})  2026-01-10 14:28:15.694707 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:15.694716 | orchestrator | 2026-01-10 14:28:15.694725 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-10 14:28:15.694739 | orchestrator | Saturday 10 January 2026 14:28:12 +0000 (0:00:00.138) 0:01:13.840 ****** 2026-01-10 14:28:15.694757 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'})  2026-01-10 14:28:15.694790 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'})  2026-01-10 14:28:15.694807 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:15.694825 | orchestrator | 2026-01-10 14:28:15.694843 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-10 14:28:15.694862 | orchestrator | Saturday 10 January 2026 14:28:13 +0000 (0:00:00.379) 0:01:14.219 ****** 2026-01-10 14:28:15.694873 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'})  2026-01-10 14:28:15.694884 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'})  2026-01-10 14:28:15.694896 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:15.694907 | orchestrator | 2026-01-10 14:28:15.694917 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-10 14:28:15.694928 | orchestrator | Saturday 10 January 2026 14:28:13 +0000 (0:00:00.161) 0:01:14.381 ****** 2026-01-10 14:28:15.694939 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'})  2026-01-10 14:28:15.694950 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'})  2026-01-10 14:28:15.694960 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:15.694971 | orchestrator | 2026-01-10 14:28:15.694982 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-10 14:28:15.694992 | orchestrator | Saturday 10 January 2026 14:28:13 +0000 (0:00:00.151) 0:01:14.532 ****** 2026-01-10 14:28:15.695003 | 
orchestrator | ok: [testbed-node-5] 2026-01-10 14:28:15.695014 | orchestrator | 2026-01-10 14:28:15.695025 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-10 14:28:15.695036 | orchestrator | Saturday 10 January 2026 14:28:14 +0000 (0:00:00.518) 0:01:15.051 ****** 2026-01-10 14:28:15.695047 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:28:15.695058 | orchestrator | 2026-01-10 14:28:15.695069 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-10 14:28:15.695080 | orchestrator | Saturday 10 January 2026 14:28:14 +0000 (0:00:00.521) 0:01:15.572 ****** 2026-01-10 14:28:15.695091 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:28:15.695101 | orchestrator | 2026-01-10 14:28:15.695112 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-10 14:28:15.695123 | orchestrator | Saturday 10 January 2026 14:28:14 +0000 (0:00:00.159) 0:01:15.731 ****** 2026-01-10 14:28:15.695134 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'vg_name': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'}) 2026-01-10 14:28:15.695146 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'vg_name': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'}) 2026-01-10 14:28:15.695165 | orchestrator | 2026-01-10 14:28:15.695176 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-10 14:28:15.695187 | orchestrator | Saturday 10 January 2026 14:28:14 +0000 (0:00:00.179) 0:01:15.911 ****** 2026-01-10 14:28:15.695216 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'})  2026-01-10 14:28:15.695226 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'})  2026-01-10 14:28:15.695270 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:15.695288 | orchestrator | 2026-01-10 14:28:15.695304 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-10 14:28:15.695321 | orchestrator | Saturday 10 January 2026 14:28:15 +0000 (0:00:00.186) 0:01:16.098 ****** 2026-01-10 14:28:15.695333 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'})  2026-01-10 14:28:15.695343 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'})  2026-01-10 14:28:15.695352 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:15.695362 | orchestrator | 2026-01-10 14:28:15.695371 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-10 14:28:15.695380 | orchestrator | Saturday 10 January 2026 14:28:15 +0000 (0:00:00.153) 0:01:16.252 ****** 2026-01-10 14:28:15.695390 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'})  2026-01-10 14:28:15.695399 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'})  2026-01-10 14:28:15.695409 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:15.695424 | orchestrator | 2026-01-10 14:28:15.695446 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-10 14:28:15.695468 | orchestrator | Saturday 10 January 2026 14:28:15 +0000 (0:00:00.161) 0:01:16.413 ****** 2026-01-10 14:28:15.695482 | 
orchestrator | ok: [testbed-node-5] => { 2026-01-10 14:28:15.695497 | orchestrator |  "lvm_report": { 2026-01-10 14:28:15.695511 | orchestrator |  "lv": [ 2026-01-10 14:28:15.695526 | orchestrator |  { 2026-01-10 14:28:15.695551 | orchestrator |  "lv_name": "osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f", 2026-01-10 14:28:15.695569 | orchestrator |  "vg_name": "ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f" 2026-01-10 14:28:15.695584 | orchestrator |  }, 2026-01-10 14:28:15.695600 | orchestrator |  { 2026-01-10 14:28:15.695610 | orchestrator |  "lv_name": "osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e", 2026-01-10 14:28:15.695620 | orchestrator |  "vg_name": "ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e" 2026-01-10 14:28:15.695629 | orchestrator |  } 2026-01-10 14:28:15.695638 | orchestrator |  ], 2026-01-10 14:28:15.695647 | orchestrator |  "pv": [ 2026-01-10 14:28:15.695657 | orchestrator |  { 2026-01-10 14:28:15.695666 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-10 14:28:15.695675 | orchestrator |  "vg_name": "ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f" 2026-01-10 14:28:15.695685 | orchestrator |  }, 2026-01-10 14:28:15.695694 | orchestrator |  { 2026-01-10 14:28:15.695717 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-10 14:28:15.695727 | orchestrator |  "vg_name": "ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e" 2026-01-10 14:28:15.695736 | orchestrator |  } 2026-01-10 14:28:15.695746 | orchestrator |  ] 2026-01-10 14:28:15.695765 | orchestrator |  } 2026-01-10 14:28:15.695774 | orchestrator | } 2026-01-10 14:28:15.695784 | orchestrator | 2026-01-10 14:28:15.695794 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:28:15.695803 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-10 14:28:15.695813 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-10 14:28:15.695823 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-10 14:28:15.695832 | orchestrator | 2026-01-10 14:28:15.695842 | orchestrator | 2026-01-10 14:28:15.695852 | orchestrator | 2026-01-10 14:28:15.695861 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:28:15.695871 | orchestrator | Saturday 10 January 2026 14:28:15 +0000 (0:00:00.168) 0:01:16.582 ****** 2026-01-10 14:28:15.695880 | orchestrator | =============================================================================== 2026-01-10 14:28:15.695889 | orchestrator | Create block VGs -------------------------------------------------------- 5.70s 2026-01-10 14:28:15.695899 | orchestrator | Create block LVs -------------------------------------------------------- 4.29s 2026-01-10 14:28:15.695908 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.82s 2026-01-10 14:28:15.695918 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.82s 2026-01-10 14:28:15.695927 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.69s 2026-01-10 14:28:15.695937 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.62s 2026-01-10 14:28:15.695946 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.59s 2026-01-10 14:28:15.695956 | orchestrator | Add known partitions to the list of available block devices ------------- 1.50s 2026-01-10 14:28:15.695975 | orchestrator | Add known links to the list of available block devices ------------------ 1.44s 2026-01-10 14:28:16.173005 | orchestrator | Add known partitions to the list of available block devices ------------- 1.08s 2026-01-10 14:28:16.173120 | orchestrator | Print LVM report data --------------------------------------------------- 1.05s 2026-01-10 14:28:16.173141 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.90s 2026-01-10 14:28:16.173160 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.87s 2026-01-10 14:28:16.173178 | orchestrator | Add known links to the list of available block devices ------------------ 0.83s 2026-01-10 14:28:16.173197 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s 2026-01-10 14:28:16.173214 | orchestrator | Add known links to the list of available block devices ------------------ 0.77s 2026-01-10 14:28:16.173313 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s 2026-01-10 14:28:16.173337 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.76s 2026-01-10 14:28:16.173354 | orchestrator | Get initial list of available block devices ----------------------------- 0.75s 2026-01-10 14:28:16.173374 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.74s 2026-01-10 14:28:28.716870 | orchestrator | 2026-01-10 14:28:28 | INFO  | Task 9d2a0873-438b-4be6-b6f7-aa18a3297201 (facts) was prepared for execution. 2026-01-10 14:28:28.716991 | orchestrator | 2026-01-10 14:28:28 | INFO  | It takes a moment until task 9d2a0873-438b-4be6-b6f7-aa18a3297201 (facts) has been started and output is visible here. 
2026-01-10 14:28:41.846400 | orchestrator | 2026-01-10 14:28:41.846518 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-10 14:28:41.846532 | orchestrator | 2026-01-10 14:28:41.846540 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-10 14:28:41.846547 | orchestrator | Saturday 10 January 2026 14:28:33 +0000 (0:00:00.295) 0:00:00.295 ****** 2026-01-10 14:28:41.846583 | orchestrator | ok: [testbed-manager] 2026-01-10 14:28:41.846592 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:28:41.846600 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:28:41.846608 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:28:41.846615 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:28:41.846622 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:28:41.846630 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:28:41.846636 | orchestrator | 2026-01-10 14:28:41.846644 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-10 14:28:41.846651 | orchestrator | Saturday 10 January 2026 14:28:34 +0000 (0:00:01.127) 0:00:01.423 ****** 2026-01-10 14:28:41.846660 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:28:41.846671 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:28:41.846683 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:28:41.846690 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:28:41.846696 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:28:41.846703 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:28:41.846710 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:41.846719 | orchestrator | 2026-01-10 14:28:41.846731 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-10 14:28:41.846744 | orchestrator | 2026-01-10 14:28:41.846752 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-10 14:28:41.846759 | orchestrator | Saturday 10 January 2026 14:28:35 +0000 (0:00:01.442) 0:00:02.865 ****** 2026-01-10 14:28:41.846768 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:28:41.846780 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:28:41.846792 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:28:41.846875 | orchestrator | ok: [testbed-manager] 2026-01-10 14:28:41.846891 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:28:41.846897 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:28:41.846907 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:28:41.846918 | orchestrator | 2026-01-10 14:28:41.846931 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-10 14:28:41.846939 | orchestrator | 2026-01-10 14:28:41.846946 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-10 14:28:41.846999 | orchestrator | Saturday 10 January 2026 14:28:40 +0000 (0:00:04.908) 0:00:07.773 ****** 2026-01-10 14:28:41.847013 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:28:41.847523 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:28:41.847537 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:28:41.847547 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:28:41.847557 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:28:41.847564 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:28:41.847570 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:28:41.847577 | orchestrator | 2026-01-10 14:28:41.847584 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:28:41.847592 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:28:41.847599 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-10 14:28:41.847605 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:28:41.847610 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:28:41.847616 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:28:41.847621 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:28:41.847644 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:28:41.847649 | orchestrator | 2026-01-10 14:28:41.847654 | orchestrator | 2026-01-10 14:28:41.847659 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:28:41.847665 | orchestrator | Saturday 10 January 2026 14:28:41 +0000 (0:00:00.556) 0:00:08.330 ****** 2026-01-10 14:28:41.847669 | orchestrator | =============================================================================== 2026-01-10 14:28:41.847675 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.91s 2026-01-10 14:28:41.847680 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.44s 2026-01-10 14:28:41.847685 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s 2026-01-10 14:28:41.847690 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2026-01-10 14:28:54.325051 | orchestrator | 2026-01-10 14:28:54 | INFO  | Task e54c80cd-f8fd-405a-8e4a-fe801274a1e0 (frr) was prepared for execution. 2026-01-10 14:28:54.326051 | orchestrator | 2026-01-10 14:28:54 | INFO  | It takes a moment until task e54c80cd-f8fd-405a-8e4a-fe801274a1e0 (frr) has been started and output is visible here. 
2026-01-10 14:29:21.724918 | orchestrator | 2026-01-10 14:29:21.725001 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-01-10 14:29:21.725008 | orchestrator | 2026-01-10 14:29:21.725013 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-01-10 14:29:21.725030 | orchestrator | Saturday 10 January 2026 14:28:58 +0000 (0:00:00.232) 0:00:00.232 ****** 2026-01-10 14:29:21.725035 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-01-10 14:29:21.725041 | orchestrator | 2026-01-10 14:29:21.725045 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-01-10 14:29:21.725049 | orchestrator | Saturday 10 January 2026 14:28:58 +0000 (0:00:00.229) 0:00:00.461 ****** 2026-01-10 14:29:21.725053 | orchestrator | changed: [testbed-manager] 2026-01-10 14:29:21.725058 | orchestrator | 2026-01-10 14:29:21.725062 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-01-10 14:29:21.725069 | orchestrator | Saturday 10 January 2026 14:29:00 +0000 (0:00:01.272) 0:00:01.734 ****** 2026-01-10 14:29:21.725073 | orchestrator | changed: [testbed-manager] 2026-01-10 14:29:21.725077 | orchestrator | 2026-01-10 14:29:21.725080 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-01-10 14:29:21.725084 | orchestrator | Saturday 10 January 2026 14:29:11 +0000 (0:00:10.987) 0:00:12.721 ****** 2026-01-10 14:29:21.725088 | orchestrator | ok: [testbed-manager] 2026-01-10 14:29:21.725094 | orchestrator | 2026-01-10 14:29:21.725101 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-01-10 14:29:21.725107 | orchestrator | Saturday 10 January 2026 14:29:12 +0000 (0:00:01.087) 0:00:13.809 ****** 2026-01-10 
14:29:21.725113 | orchestrator | changed: [testbed-manager] 2026-01-10 14:29:21.725119 | orchestrator | 2026-01-10 14:29:21.725130 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-01-10 14:29:21.725137 | orchestrator | Saturday 10 January 2026 14:29:13 +0000 (0:00:01.015) 0:00:14.825 ****** 2026-01-10 14:29:21.725143 | orchestrator | ok: [testbed-manager] 2026-01-10 14:29:21.725150 | orchestrator | 2026-01-10 14:29:21.725156 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-01-10 14:29:21.725165 | orchestrator | Saturday 10 January 2026 14:29:14 +0000 (0:00:01.217) 0:00:16.043 ****** 2026-01-10 14:29:21.725173 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:29:21.725182 | orchestrator | 2026-01-10 14:29:21.725188 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-01-10 14:29:21.725195 | orchestrator | Saturday 10 January 2026 14:29:14 +0000 (0:00:00.149) 0:00:16.193 ****** 2026-01-10 14:29:21.725220 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:29:21.725224 | orchestrator | 2026-01-10 14:29:21.725228 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-01-10 14:29:21.725232 | orchestrator | Saturday 10 January 2026 14:29:14 +0000 (0:00:00.164) 0:00:16.357 ****** 2026-01-10 14:29:21.725235 | orchestrator | changed: [testbed-manager] 2026-01-10 14:29:21.725239 | orchestrator | 2026-01-10 14:29:21.725243 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-01-10 14:29:21.725246 | orchestrator | Saturday 10 January 2026 14:29:15 +0000 (0:00:01.030) 0:00:17.388 ****** 2026-01-10 14:29:21.725250 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-01-10 14:29:21.725254 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-01-10 14:29:21.725259 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-01-10 14:29:21.725263 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-01-10 14:29:21.725266 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-01-10 14:29:21.725270 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-01-10 14:29:21.725274 | orchestrator | 2026-01-10 14:29:21.725278 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-01-10 14:29:21.725281 | orchestrator | Saturday 10 January 2026 14:29:18 +0000 (0:00:02.378) 0:00:19.767 ****** 2026-01-10 14:29:21.725285 | orchestrator | ok: [testbed-manager] 2026-01-10 14:29:21.725289 | orchestrator | 2026-01-10 14:29:21.725292 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-01-10 14:29:21.725296 | orchestrator | Saturday 10 January 2026 14:29:19 +0000 (0:00:01.666) 0:00:21.433 ****** 2026-01-10 14:29:21.725300 | orchestrator | changed: [testbed-manager] 2026-01-10 14:29:21.725303 | orchestrator | 2026-01-10 14:29:21.725307 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:29:21.725311 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:29:21.725315 | orchestrator | 2026-01-10 14:29:21.725319 | orchestrator | 2026-01-10 14:29:21.725322 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:29:21.725326 | orchestrator | Saturday 10 January 2026 14:29:21 +0000 (0:00:01.480) 0:00:22.914 ****** 2026-01-10 14:29:21.725330 | 
orchestrator | =============================================================================== 2026-01-10 14:29:21.725333 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.99s 2026-01-10 14:29:21.725337 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.38s 2026-01-10 14:29:21.725341 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.67s 2026-01-10 14:29:21.725344 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.48s 2026-01-10 14:29:21.725348 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.27s 2026-01-10 14:29:21.725367 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.22s 2026-01-10 14:29:21.725376 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.09s 2026-01-10 14:29:21.725382 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.03s 2026-01-10 14:29:21.725388 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.02s 2026-01-10 14:29:21.725440 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s 2026-01-10 14:29:21.725447 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s 2026-01-10 14:29:21.725451 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s 2026-01-10 14:29:22.044888 | orchestrator | 2026-01-10 14:29:22.046682 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Jan 10 14:29:22 UTC 2026 2026-01-10 14:29:22.046736 | orchestrator | 2026-01-10 14:29:24.127882 | orchestrator | 2026-01-10 14:29:24 | INFO  | Collection nutshell is prepared for execution 2026-01-10 14:29:24.127978 | orchestrator | 2026-01-10 14:29:24 | INFO  | A [0] - 
dotfiles 2026-01-10 14:29:34.305210 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [0] - homer 2026-01-10 14:29:34.305358 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [0] - netdata 2026-01-10 14:29:34.305369 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [0] - openstackclient 2026-01-10 14:29:34.305376 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [0] - phpmyadmin 2026-01-10 14:29:34.305393 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [0] - common 2026-01-10 14:29:34.311406 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [1] -- loadbalancer 2026-01-10 14:29:34.311506 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [2] --- opensearch 2026-01-10 14:29:34.311683 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [2] --- mariadb-ng 2026-01-10 14:29:34.312351 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [3] ---- horizon 2026-01-10 14:29:34.313389 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [3] ---- keystone 2026-01-10 14:29:34.313507 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [4] ----- neutron 2026-01-10 14:29:34.314115 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [5] ------ wait-for-nova 2026-01-10 14:29:34.314330 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [6] ------- octavia 2026-01-10 14:29:34.316672 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [4] ----- barbican 2026-01-10 14:29:34.316707 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [4] ----- designate 2026-01-10 14:29:34.316887 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [4] ----- ironic 2026-01-10 14:29:34.317521 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [4] ----- placement 2026-01-10 14:29:34.317553 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [4] ----- magnum 2026-01-10 14:29:34.318655 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [1] -- openvswitch 2026-01-10 14:29:34.318703 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [2] --- ovn 2026-01-10 14:29:34.319083 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [1] -- memcached 2026-01-10 
14:29:34.319377 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [1] -- redis 2026-01-10 14:29:34.319944 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [1] -- rabbitmq-ng 2026-01-10 14:29:34.319968 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [0] - kubernetes 2026-01-10 14:29:34.323299 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [1] -- kubeconfig 2026-01-10 14:29:34.323352 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [1] -- copy-kubeconfig 2026-01-10 14:29:34.323362 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [0] - ceph 2026-01-10 14:29:34.325505 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [1] -- ceph-pools 2026-01-10 14:29:34.325549 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [2] --- copy-ceph-keys 2026-01-10 14:29:34.325560 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [3] ---- cephclient 2026-01-10 14:29:34.325571 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-01-10 14:29:34.325581 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [4] ----- wait-for-keystone 2026-01-10 14:29:34.326262 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [5] ------ kolla-ceph-rgw 2026-01-10 14:29:34.326296 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [5] ------ glance 2026-01-10 14:29:34.326339 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [5] ------ cinder 2026-01-10 14:29:34.326351 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [5] ------ nova 2026-01-10 14:29:34.326363 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [4] ----- prometheus 2026-01-10 14:29:34.326917 | orchestrator | 2026-01-10 14:29:34 | INFO  | A [5] ------ grafana 2026-01-10 14:29:34.607582 | orchestrator | 2026-01-10 14:29:34 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-01-10 14:29:34.607679 | orchestrator | 2026-01-10 14:29:34 | INFO  | Tasks are running in the background 2026-01-10 14:29:37.979285 | orchestrator | 2026-01-10 14:29:37 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-01-10 14:29:40.101106 | orchestrator | 2026-01-10 14:29:40 | INFO  | Task f63d2e19-0a2a-49e7-a0bb-3f9c0833119b is in state STARTED 2026-01-10 14:29:40.101333 | orchestrator | 2026-01-10 14:29:40 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:29:40.102189 | orchestrator | 2026-01-10 14:29:40 | INFO  | Task 7d5a6090-3c8b-4e0a-9098-398a44aadcf9 is in state STARTED 2026-01-10 14:29:40.102512 | orchestrator | 2026-01-10 14:29:40 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:29:40.103053 | orchestrator | 2026-01-10 14:29:40 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:29:40.104390 | orchestrator | 2026-01-10 14:29:40 | INFO  | Task 1b88017a-4044-4ec2-a2c8-da647970171f is in state STARTED 2026-01-10 14:29:40.104898 | orchestrator | 2026-01-10 14:29:40 | INFO  | Task 0ea35c0e-8b75-4234-8dc8-b273b3fb89c1 is in state STARTED 2026-01-10 14:29:40.107244 | orchestrator | 2026-01-10 14:29:40 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:30:02.006201 | orchestrator | 2026-01-10 14:30:02 | INFO  | Task f63d2e19-0a2a-49e7-a0bb-3f9c0833119b is in state STARTED 2026-01-10 14:30:02.006293 | orchestrator | 2026-01-10 14:30:02 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:30:02.008571 | orchestrator | 2026-01-10 14:30:02 | INFO  | Task 7d5a6090-3c8b-4e0a-9098-398a44aadcf9 is in state STARTED 2026-01-10 14:30:02.008914 | orchestrator | 2026-01-10 14:30:02 | INFO  | Task 
5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:30:02.009593 | orchestrator | 2026-01-10 14:30:02 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:30:02.010805 | orchestrator | 2026-01-10 14:30:02 | INFO  | Task 1b88017a-4044-4ec2-a2c8-da647970171f is in state STARTED 2026-01-10 14:30:02.010862 | orchestrator | 2026-01-10 14:30:02 | INFO  | Task 0ea35c0e-8b75-4234-8dc8-b273b3fb89c1 is in state STARTED 2026-01-10 14:30:02.010872 | orchestrator | 2026-01-10 14:30:02 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:30:05.061761 | orchestrator | 2026-01-10 14:30:05 | INFO  | Task f63d2e19-0a2a-49e7-a0bb-3f9c0833119b is in state STARTED 2026-01-10 14:30:05.064574 | orchestrator | 2026-01-10 14:30:05 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:30:05.065109 | orchestrator | 2026-01-10 14:30:05 | INFO  | Task 7d5a6090-3c8b-4e0a-9098-398a44aadcf9 is in state STARTED 2026-01-10 14:30:05.065691 | orchestrator | 2026-01-10 14:30:05 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:30:05.066298 | orchestrator | 2026-01-10 14:30:05 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:30:05.067608 | orchestrator | 2026-01-10 14:30:05 | INFO  | Task 1b88017a-4044-4ec2-a2c8-da647970171f is in state STARTED 2026-01-10 14:30:05.067923 | orchestrator | 2026-01-10 14:30:05 | INFO  | Task 0ea35c0e-8b75-4234-8dc8-b273b3fb89c1 is in state SUCCESS 2026-01-10 14:30:05.068226 | orchestrator | 2026-01-10 14:30:05.068240 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-01-10 14:30:05.068246 | orchestrator | 2026-01-10 14:30:05.068250 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2026-01-10 14:30:05.068254 | orchestrator | Saturday 10 January 2026 14:29:49 +0000 (0:00:01.326) 0:00:01.326 ****** 2026-01-10 14:30:05.068258 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:30:05.068263 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:30:05.068267 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:30:05.068272 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:30:05.068276 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:30:05.068280 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:30:05.068284 | orchestrator | changed: [testbed-manager] 2026-01-10 14:30:05.068288 | orchestrator | 2026-01-10 14:30:05.068292 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-01-10 14:30:05.068296 | orchestrator | Saturday 10 January 2026 14:29:53 +0000 (0:00:03.938) 0:00:05.265 ****** 2026-01-10 14:30:05.068320 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-10 14:30:05.068324 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-10 14:30:05.068328 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-10 14:30:05.068332 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-10 14:30:05.068336 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-10 14:30:05.068340 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-10 14:30:05.068344 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-10 14:30:05.068348 | orchestrator | 2026-01-10 14:30:05.068352 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-01-10 14:30:05.068356 | orchestrator | Saturday 10 January 2026 14:29:55 +0000 (0:00:02.039) 0:00:07.304 ****** 2026-01-10 14:30:05.068363 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:29:54.016392', 'end': '2026-01-10 14:29:54.020927', 'delta': '0:00:00.004535', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-10 14:30:05.068374 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:29:53.787544', 'end': '2026-01-10 14:29:53.794498', 'delta': '0:00:00.006954', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-10 14:30:05.068378 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:29:53.754586', 'end': '2026-01-10 14:29:53.765487', 'delta': '0:00:00.010901', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-10 14:30:05.068600 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:29:53.913093', 'end': '2026-01-10 14:29:53.920920', 'delta': '0:00:00.007827', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-10 14:30:05.068619 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:29:54.324529', 'end': '2026-01-10 14:29:54.332281', 'delta': '0:00:00.007752', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-10 14:30:05.068626 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:29:54.234873', 'end': '2026-01-10 14:29:54.246731', 'delta': '0:00:00.011858', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-10 14:30:05.068630 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-10 14:29:54.593747', 'end': '2026-01-10 14:29:54.601086', 'delta': '0:00:00.007339', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-10 14:30:05.068634 | orchestrator | 2026-01-10 14:30:05.068638 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-01-10 14:30:05.068642 | orchestrator | Saturday 10 January 2026 14:29:58 +0000 (0:00:03.274) 0:00:10.579 ****** 2026-01-10 14:30:05.068646 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-10 14:30:05.068650 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-10 14:30:05.068654 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-10 14:30:05.068657 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-10 14:30:05.068661 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-10 14:30:05.068665 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-10 14:30:05.068669 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-10 14:30:05.068672 | orchestrator | 2026-01-10 14:30:05.068676 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-01-10 14:30:05.068680 | orchestrator | Saturday 10 January 2026 14:30:01 +0000 (0:00:02.989) 0:00:13.568 ****** 2026-01-10 14:30:05.068684 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-01-10 14:30:05.068687 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-01-10 14:30:05.068691 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-01-10 14:30:05.068699 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-01-10 14:30:05.068703 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-01-10 14:30:05.068707 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-01-10 14:30:05.068711 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-01-10 14:30:05.068715 | orchestrator | 2026-01-10 14:30:05.068719 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:30:05.068728 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:30:05.068733 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:30:05.068737 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:30:05.068741 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:30:05.068745 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:30:05.068748 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:30:05.068753 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:30:05.068756 | orchestrator | 2026-01-10 14:30:05.068760 | orchestrator | 2026-01-10 14:30:05.068764 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-01-10 14:30:05.068768 | orchestrator | Saturday 10 January 2026 14:30:04 +0000 (0:00:02.896) 0:00:16.465 ****** 2026-01-10 14:30:05.068774 | orchestrator | =============================================================================== 2026-01-10 14:30:05.068781 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.94s 2026-01-10 14:30:05.068787 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.28s 2026-01-10 14:30:05.068792 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.99s 2026-01-10 14:30:05.068798 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.90s 2026-01-10 14:30:05.068804 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.04s 2026-01-10 14:30:05.074006 | orchestrator | 2026-01-10 14:30:05 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:30:08.132785 | orchestrator | 2026-01-10 14:30:08 | INFO  | Task f63d2e19-0a2a-49e7-a0bb-3f9c0833119b is in state STARTED 2026-01-10 14:30:08.134101 | orchestrator | 2026-01-10 14:30:08 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:30:08.134344 | orchestrator | 2026-01-10 14:30:08 | INFO  | Task b8481bf6-201d-4cfb-b42e-198246b7eaca is in state STARTED 2026-01-10 14:30:08.135037 | orchestrator | 2026-01-10 14:30:08 | INFO  | Task 7d5a6090-3c8b-4e0a-9098-398a44aadcf9 is in state STARTED 2026-01-10 14:30:08.136079 | orchestrator | 2026-01-10 14:30:08 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:30:08.137214 | orchestrator | 2026-01-10 14:30:08 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:30:08.138085 | orchestrator | 2026-01-10 14:30:08 | INFO  | Task 1b88017a-4044-4ec2-a2c8-da647970171f is 
in state STARTED 2026-01-10 14:30:08.138111 | orchestrator | 2026-01-10 14:30:08 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:30:11.189480 | orchestrator | 2026-01-10 14:30:11 | INFO  | Task f63d2e19-0a2a-49e7-a0bb-3f9c0833119b is in state STARTED 2026-01-10 14:30:11.189614 | orchestrator | 2026-01-10 14:30:11 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:30:11.190477 | orchestrator | 2026-01-10 14:30:11 | INFO  | Task b8481bf6-201d-4cfb-b42e-198246b7eaca is in state STARTED 2026-01-10 14:30:11.192099 | orchestrator | 2026-01-10 14:30:11 | INFO  | Task 7d5a6090-3c8b-4e0a-9098-398a44aadcf9 is in state STARTED 2026-01-10 14:30:11.194818 | orchestrator | 2026-01-10 14:30:11 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:30:11.196856 | orchestrator | 2026-01-10 14:30:11 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:30:11.198550 | orchestrator | 2026-01-10 14:30:11 | INFO  | Task 1b88017a-4044-4ec2-a2c8-da647970171f is in state STARTED 2026-01-10 14:30:11.198588 | orchestrator | 2026-01-10 14:30:11 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:30:14.501434 | orchestrator | 2026-01-10 14:30:14 | INFO  | Task f63d2e19-0a2a-49e7-a0bb-3f9c0833119b is in state STARTED 2026-01-10 14:30:14.503584 | orchestrator | 2026-01-10 14:30:14 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:30:14.504823 | orchestrator | 2026-01-10 14:30:14 | INFO  | Task b8481bf6-201d-4cfb-b42e-198246b7eaca is in state STARTED 2026-01-10 14:30:14.506399 | orchestrator | 2026-01-10 14:30:14 | INFO  | Task 7d5a6090-3c8b-4e0a-9098-398a44aadcf9 is in state STARTED 2026-01-10 14:30:14.509295 | orchestrator | 2026-01-10 14:30:14 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:30:14.510931 | orchestrator | 2026-01-10 14:30:14 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in 
state STARTED 2026-01-10 14:30:14.513043 | orchestrator | 2026-01-10 14:30:14 | INFO  | Task 1b88017a-4044-4ec2-a2c8-da647970171f is in state STARTED 2026-01-10 14:30:14.513133 | orchestrator | 2026-01-10 14:30:14 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:30:17.671799 | orchestrator | 2026-01-10 14:30:17 | INFO  | Task f63d2e19-0a2a-49e7-a0bb-3f9c0833119b is in state STARTED 2026-01-10 14:30:17.675331 | orchestrator | 2026-01-10 14:30:17 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:30:17.678105 | orchestrator | 2026-01-10 14:30:17 | INFO  | Task b8481bf6-201d-4cfb-b42e-198246b7eaca is in state STARTED 2026-01-10 14:30:17.680590 | orchestrator | 2026-01-10 14:30:17 | INFO  | Task 7d5a6090-3c8b-4e0a-9098-398a44aadcf9 is in state STARTED 2026-01-10 14:30:17.681774 | orchestrator | 2026-01-10 14:30:17 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:30:17.685947 | orchestrator | 2026-01-10 14:30:17 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:30:17.688691 | orchestrator | 2026-01-10 14:30:17 | INFO  | Task 1b88017a-4044-4ec2-a2c8-da647970171f is in state STARTED 2026-01-10 14:30:17.689056 | orchestrator | 2026-01-10 14:30:17 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:30:20.771662 | orchestrator | 2026-01-10 14:30:20 | INFO  | Task f63d2e19-0a2a-49e7-a0bb-3f9c0833119b is in state STARTED 2026-01-10 14:30:20.774287 | orchestrator | 2026-01-10 14:30:20 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:30:20.779070 | orchestrator | 2026-01-10 14:30:20 | INFO  | Task b8481bf6-201d-4cfb-b42e-198246b7eaca is in state STARTED 2026-01-10 14:30:20.780483 | orchestrator | 2026-01-10 14:30:20 | INFO  | Task 7d5a6090-3c8b-4e0a-9098-398a44aadcf9 is in state STARTED 2026-01-10 14:30:20.784411 | orchestrator | 2026-01-10 14:30:20 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state 
STARTED 2026-01-10 14:30:20.785043 | orchestrator | 2026-01-10 14:30:20 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:30:20.789128 | orchestrator | 2026-01-10 14:30:20 | INFO  | Task 1b88017a-4044-4ec2-a2c8-da647970171f is in state STARTED 2026-01-10 14:30:20.789234 | orchestrator | 2026-01-10 14:30:20 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:30:24.021654 | orchestrator | 2026-01-10 14:30:23 | INFO  | Task f63d2e19-0a2a-49e7-a0bb-3f9c0833119b is in state STARTED 2026-01-10 14:30:24.021734 | orchestrator | 2026-01-10 14:30:23 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:30:24.021744 | orchestrator | 2026-01-10 14:30:23 | INFO  | Task b8481bf6-201d-4cfb-b42e-198246b7eaca is in state STARTED 2026-01-10 14:30:24.021752 | orchestrator | 2026-01-10 14:30:23 | INFO  | Task 7d5a6090-3c8b-4e0a-9098-398a44aadcf9 is in state STARTED 2026-01-10 14:30:24.021760 | orchestrator | 2026-01-10 14:30:24 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:30:24.021767 | orchestrator | 2026-01-10 14:30:24 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:30:24.021774 | orchestrator | 2026-01-10 14:30:24 | INFO  | Task 1b88017a-4044-4ec2-a2c8-da647970171f is in state STARTED 2026-01-10 14:30:24.021781 | orchestrator | 2026-01-10 14:30:24 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:30:27.170526 | orchestrator | 2026-01-10 14:30:27 | INFO  | Task f63d2e19-0a2a-49e7-a0bb-3f9c0833119b is in state STARTED 2026-01-10 14:30:27.170657 | orchestrator | 2026-01-10 14:30:27 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:30:27.170679 | orchestrator | 2026-01-10 14:30:27 | INFO  | Task b8481bf6-201d-4cfb-b42e-198246b7eaca is in state STARTED 2026-01-10 14:30:27.170697 | orchestrator | 2026-01-10 14:30:27 | INFO  | Task 7d5a6090-3c8b-4e0a-9098-398a44aadcf9 is in state STARTED 
2026-01-10 14:30:27.170714 | orchestrator | 2026-01-10 14:30:27 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:30:27.170731 | orchestrator | 2026-01-10 14:30:27 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:30:27.170748 | orchestrator | 2026-01-10 14:30:27 | INFO  | Task 1b88017a-4044-4ec2-a2c8-da647970171f is in state STARTED 2026-01-10 14:30:27.170765 | orchestrator | 2026-01-10 14:30:27 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:30:30.481446 | orchestrator | 2026-01-10 14:30:30 | INFO  | Task f63d2e19-0a2a-49e7-a0bb-3f9c0833119b is in state SUCCESS 2026-01-10 14:30:30.481835 | orchestrator | 2026-01-10 14:30:30 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:30:30.481892 | orchestrator | 2026-01-10 14:30:30 | INFO  | Task b8481bf6-201d-4cfb-b42e-198246b7eaca is in state STARTED 2026-01-10 14:30:30.481933 | orchestrator | 2026-01-10 14:30:30 | INFO  | Task 7d5a6090-3c8b-4e0a-9098-398a44aadcf9 is in state STARTED 2026-01-10 14:30:30.481974 | orchestrator | 2026-01-10 14:30:30 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:30:30.482011 | orchestrator | 2026-01-10 14:30:30 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:30:30.482223 | orchestrator | 2026-01-10 14:30:30 | INFO  | Task 1b88017a-4044-4ec2-a2c8-da647970171f is in state STARTED 2026-01-10 14:30:30.482256 | orchestrator | 2026-01-10 14:30:30 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:30:33.324149 | orchestrator | 2026-01-10 14:30:33 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:30:33.324352 | orchestrator | 2026-01-10 14:30:33 | INFO  | Task b8481bf6-201d-4cfb-b42e-198246b7eaca is in state STARTED 2026-01-10 14:30:33.327050 | orchestrator | 2026-01-10 14:30:33 | INFO  | Task 7d5a6090-3c8b-4e0a-9098-398a44aadcf9 is in state STARTED 
2026-01-10 14:30:33.329596 | orchestrator | 2026-01-10 14:30:33 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:30:33.332062 | orchestrator | 2026-01-10 14:30:33 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:30:33.336418 | orchestrator | 2026-01-10 14:30:33 | INFO  | Task 1b88017a-4044-4ec2-a2c8-da647970171f is in state STARTED 2026-01-10 14:30:33.336479 | orchestrator | 2026-01-10 14:30:33 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:30:36.376674 | orchestrator | 2026-01-10 14:30:36 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:30:36.379933 | orchestrator | 2026-01-10 14:30:36 | INFO  | Task b8481bf6-201d-4cfb-b42e-198246b7eaca is in state STARTED 2026-01-10 14:30:36.380131 | orchestrator | 2026-01-10 14:30:36 | INFO  | Task 7d5a6090-3c8b-4e0a-9098-398a44aadcf9 is in state SUCCESS 2026-01-10 14:30:36.381629 | orchestrator | 2026-01-10 14:30:36 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:30:36.384455 | orchestrator | 2026-01-10 14:30:36 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:30:36.385436 | orchestrator | 2026-01-10 14:30:36 | INFO  | Task 1b88017a-4044-4ec2-a2c8-da647970171f is in state STARTED 2026-01-10 14:30:36.385474 | orchestrator | 2026-01-10 14:30:36 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:30:39.430894 | orchestrator | 2026-01-10 14:30:39 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:30:39.431764 | orchestrator | 2026-01-10 14:30:39 | INFO  | Task b8481bf6-201d-4cfb-b42e-198246b7eaca is in state STARTED 2026-01-10 14:30:39.433735 | orchestrator | 2026-01-10 14:30:39 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:30:39.435111 | orchestrator | 2026-01-10 14:30:39 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 
2026-01-10 14:30:39.436523 | orchestrator | 2026-01-10 14:30:39 | INFO  | Task 1b88017a-4044-4ec2-a2c8-da647970171f is in state STARTED 2026-01-10 14:30:39.436555 | orchestrator | 2026-01-10 14:30:39 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:30:42.542246 | orchestrator | 2026-01-10 14:30:42 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:30:42.542741 | orchestrator | 2026-01-10 14:30:42 | INFO  | Task b8481bf6-201d-4cfb-b42e-198246b7eaca is in state STARTED 2026-01-10 14:30:42.543821 | orchestrator | 2026-01-10 14:30:42 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:30:42.546117 | orchestrator | 2026-01-10 14:30:42 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:30:42.547160 | orchestrator | 2026-01-10 14:30:42 | INFO  | Task 1b88017a-4044-4ec2-a2c8-da647970171f is in state STARTED 2026-01-10 14:30:42.547209 | orchestrator | 2026-01-10 14:30:42 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:30:45.599735 | orchestrator | 2026-01-10 14:30:45 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:30:45.600298 | orchestrator | 2026-01-10 14:30:45 | INFO  | Task b8481bf6-201d-4cfb-b42e-198246b7eaca is in state STARTED 2026-01-10 14:30:45.601478 | orchestrator | 2026-01-10 14:30:45 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:30:45.604373 | orchestrator | 2026-01-10 14:30:45 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:30:45.605186 | orchestrator | 2026-01-10 14:30:45 | INFO  | Task 1b88017a-4044-4ec2-a2c8-da647970171f is in state STARTED 2026-01-10 14:30:45.605230 | orchestrator | 2026-01-10 14:30:45 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:30:48.664309 | orchestrator | 2026-01-10 14:30:48 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:30:48.664791 | 
orchestrator | 2026-01-10 14:30:48 | INFO  | Task b8481bf6-201d-4cfb-b42e-198246b7eaca is in state STARTED
2026-01-10 14:30:48.667151 | orchestrator | 2026-01-10 14:30:48 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED
2026-01-10 14:30:48.667938 | orchestrator | 2026-01-10 14:30:48 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:30:48.668814 | orchestrator | 2026-01-10 14:30:48 | INFO  | Task 1b88017a-4044-4ec2-a2c8-da647970171f is in state STARTED
2026-01-10 14:30:48.668839 | orchestrator | 2026-01-10 14:30:48 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:31:16.457512 | orchestrator | 2026-01-10 14:31:16 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:31:16.457564 | orchestrator | 2026-01-10 14:31:16 | INFO  | Task b8481bf6-201d-4cfb-b42e-198246b7eaca is in state STARTED
2026-01-10 14:31:16.458221 | orchestrator | 2026-01-10 14:31:16 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED
2026-01-10 14:31:16.459669 | orchestrator | 2026-01-10 14:31:16 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:31:16.460284 | orchestrator | 2026-01-10 14:31:16 | INFO  | Task 1b88017a-4044-4ec2-a2c8-da647970171f is in state SUCCESS
2026-01-10 14:31:16.465769 | orchestrator |
2026-01-10 14:31:16.465810 | orchestrator |
2026-01-10 14:31:16.465817 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-01-10 14:31:16.465824 | orchestrator |
2026-01-10 14:31:16.465830 | orchestrator | TASK [osism.services.homer :
Inform about new parameter homer_url_opensearch_dashboards] ***
2026-01-10 14:31:16.465836 | orchestrator | Saturday 10 January 2026 14:29:47 +0000 (0:00:00.724) 0:00:00.724 ******
2026-01-10 14:31:16.465842 | orchestrator | ok: [testbed-manager] => {
2026-01-10 14:31:16.465849 | orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-01-10 14:31:16.465856 | orchestrator | }
2026-01-10 14:31:16.465862 | orchestrator |
2026-01-10 14:31:16.465868 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-01-10 14:31:16.465874 | orchestrator | Saturday 10 January 2026 14:29:48 +0000 (0:00:00.189) 0:00:00.914 ******
2026-01-10 14:31:16.465880 | orchestrator | ok: [testbed-manager]
2026-01-10 14:31:16.465887 | orchestrator |
2026-01-10 14:31:16.465893 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-01-10 14:31:16.465899 | orchestrator | Saturday 10 January 2026 14:29:50 +0000 (0:00:02.226) 0:00:03.141 ******
2026-01-10 14:31:16.465905 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-01-10 14:31:16.465912 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-01-10 14:31:16.465918 | orchestrator |
2026-01-10 14:31:16.465924 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-01-10 14:31:16.465930 | orchestrator | Saturday 10 January 2026 14:29:51 +0000 (0:00:01.514) 0:00:04.656 ******
2026-01-10 14:31:16.465937 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:16.465943 | orchestrator |
2026-01-10 14:31:16.465950 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-01-10 14:31:16.465958 | orchestrator | Saturday 10 January 2026 14:29:55 +0000 (0:00:03.956) 0:00:08.612 ******
2026-01-10 14:31:16.465962 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:16.465966 | orchestrator |
2026-01-10 14:31:16.465969 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-01-10 14:31:16.465974 | orchestrator | Saturday 10 January 2026 14:29:57 +0000 (0:00:01.755) 0:00:10.368 ******
2026-01-10 14:31:16.465977 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-01-10 14:31:16.465981 | orchestrator | ok: [testbed-manager]
2026-01-10 14:31:16.465985 | orchestrator |
2026-01-10 14:31:16.465988 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-01-10 14:31:16.465992 | orchestrator | Saturday 10 January 2026 14:30:22 +0000 (0:00:25.512) 0:00:35.880 ******
2026-01-10 14:31:16.465996 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:16.466000 | orchestrator |
2026-01-10 14:31:16.466003 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:31:16.466007 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:31:16.466053 | orchestrator |
2026-01-10 14:31:16.466061 | orchestrator |
2026-01-10 14:31:16.466067 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:31:16.466086 | orchestrator | Saturday 10 January 2026 14:30:28 +0000 (0:00:05.674) 0:00:41.555 ******
2026-01-10 14:31:16.466092 | orchestrator | ===============================================================================
2026-01-10 14:31:16.466109 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.51s
2026-01-10 14:31:16.466120 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 5.67s
2026-01-10 14:31:16.466126 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.96s
2026-01-10 14:31:16.466132 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.23s
2026-01-10 14:31:16.466138 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.76s
2026-01-10 14:31:16.466144 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.51s
2026-01-10 14:31:16.466150 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.19s
2026-01-10 14:31:16.466157 | orchestrator |
2026-01-10 14:31:16.466163 | orchestrator |
2026-01-10 14:31:16.466168 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-01-10 14:31:16.466175 | orchestrator |
2026-01-10 14:31:16.466181 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-01-10 14:31:16.466187 | orchestrator | Saturday 10 January 2026 14:29:46 +0000 (0:00:00.516) 0:00:00.516 ******
2026-01-10 14:31:16.466195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-01-10 14:31:16.466202 | orchestrator |
2026-01-10 14:31:16.466209 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-01-10 14:31:16.466215 | orchestrator | Saturday 10 January 2026 14:29:47 +0000 (0:00:00.641) 0:00:01.157 ******
2026-01-10 14:31:16.466221 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-01-10 14:31:16.466228 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-01-10 14:31:16.466234 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-01-10 14:31:16.466240 | orchestrator |
2026-01-10 14:31:16.466244 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-01-10
14:31:16.466248 | orchestrator | Saturday 10 January 2026 14:29:49 +0000 (0:00:01.864) 0:00:03.022 ******
2026-01-10 14:31:16.466251 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:16.466255 | orchestrator |
2026-01-10 14:31:16.466258 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-01-10 14:31:16.466262 | orchestrator | Saturday 10 January 2026 14:29:51 +0000 (0:00:02.134) 0:00:05.156 ******
2026-01-10 14:31:16.466275 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-01-10 14:31:16.466282 | orchestrator | ok: [testbed-manager]
2026-01-10 14:31:16.466288 | orchestrator |
2026-01-10 14:31:16.466294 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-01-10 14:31:16.466300 | orchestrator | Saturday 10 January 2026 14:30:24 +0000 (0:00:32.555) 0:00:37.712 ******
2026-01-10 14:31:16.466305 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:16.466311 | orchestrator |
2026-01-10 14:31:16.466316 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-01-10 14:31:16.466322 | orchestrator | Saturday 10 January 2026 14:30:28 +0000 (0:00:04.192) 0:00:41.905 ******
2026-01-10 14:31:16.466328 | orchestrator | ok: [testbed-manager]
2026-01-10 14:31:16.466334 | orchestrator |
2026-01-10 14:31:16.466340 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-01-10 14:31:16.466346 | orchestrator | Saturday 10 January 2026 14:30:29 +0000 (0:00:00.853) 0:00:42.758 ******
2026-01-10 14:31:16.466352 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:16.466359 | orchestrator |
2026-01-10 14:31:16.466366 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-01-10 14:31:16.466378 | orchestrator | Saturday 10 January 2026 14:30:32 +0000 (0:00:03.339) 0:00:46.097 ******
2026-01-10 14:31:16.466384 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:16.466388 | orchestrator |
2026-01-10 14:31:16.466392 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-01-10 14:31:16.466397 | orchestrator | Saturday 10 January 2026 14:30:34 +0000 (0:00:01.770) 0:00:47.869 ******
2026-01-10 14:31:16.466401 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:16.466405 | orchestrator |
2026-01-10 14:31:16.466409 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-01-10 14:31:16.466413 | orchestrator | Saturday 10 January 2026 14:30:35 +0000 (0:00:00.669) 0:00:48.538 ******
2026-01-10 14:31:16.466436 | orchestrator | ok: [testbed-manager]
2026-01-10 14:31:16.466440 | orchestrator |
2026-01-10 14:31:16.466445 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:31:16.466449 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:31:16.466454 | orchestrator |
2026-01-10 14:31:16.466458 | orchestrator |
2026-01-10 14:31:16.466462 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:31:16.466466 | orchestrator | Saturday 10 January 2026 14:30:35 +0000 (0:00:00.566) 0:00:49.105 ******
2026-01-10 14:31:16.466471 | orchestrator | ===============================================================================
2026-01-10 14:31:16.466475 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.56s
2026-01-10 14:31:16.466480 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 4.19s
2026-01-10 14:31:16.466484 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.34s
2026-01-10 14:31:16.466488 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.13s
2026-01-10 14:31:16.466492 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.86s
2026-01-10 14:31:16.466497 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.77s
2026-01-10 14:31:16.466501 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.85s
2026-01-10 14:31:16.466508 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.67s
2026-01-10 14:31:16.466515 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.64s
2026-01-10 14:31:16.466521 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.57s
2026-01-10 14:31:16.466528 | orchestrator |
2026-01-10 14:31:16.466534 | orchestrator |
2026-01-10 14:31:16.466540 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:31:16.466546 | orchestrator |
2026-01-10 14:31:16.466552 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:31:16.466559 | orchestrator | Saturday 10 January 2026 14:29:48 +0000 (0:00:00.470) 0:00:00.470 ******
2026-01-10 14:31:16.466566 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-01-10 14:31:16.466573 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-01-10 14:31:16.466579 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-01-10 14:31:16.466586 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-01-10 14:31:16.466593 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-01-10 14:31:16.466599 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-01-10 14:31:16.466603 | orchestrator | changed: [testbed-node-5] =>
(item=enable_netdata_True)
2026-01-10 14:31:16.466607 | orchestrator |
2026-01-10 14:31:16.466611 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-01-10 14:31:16.466615 | orchestrator |
2026-01-10 14:31:16.466619 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-01-10 14:31:16.466623 | orchestrator | Saturday 10 January 2026 14:29:49 +0000 (0:00:01.113) 0:00:01.584 ******
2026-01-10 14:31:16.466666 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:31:16.466675 | orchestrator |
2026-01-10 14:31:16.466679 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-01-10 14:31:16.466684 | orchestrator | Saturday 10 January 2026 14:29:52 +0000 (0:00:03.468) 0:00:05.052 ******
2026-01-10 14:31:16.466688 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:31:16.466693 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:31:16.466697 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:31:16.466701 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:31:16.466705 | orchestrator | ok: [testbed-manager]
2026-01-10 14:31:16.466714 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:31:16.466718 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:31:16.466722 | orchestrator |
2026-01-10 14:31:16.466726 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-01-10 14:31:16.466730 | orchestrator | Saturday 10 January 2026 14:29:56 +0000 (0:00:03.443) 0:00:08.496 ******
2026-01-10 14:31:16.466734 | orchestrator | ok: [testbed-manager]
2026-01-10 14:31:16.466737 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:31:16.466741 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:31:16.466745 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:31:16.466748 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:31:16.466752 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:31:16.466755 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:31:16.466759 | orchestrator |
2026-01-10 14:31:16.466763 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-01-10 14:31:16.466767 | orchestrator | Saturday 10 January 2026 14:29:59 +0000 (0:00:03.623) 0:00:12.120 ******
2026-01-10 14:31:16.466770 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:16.466774 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:31:16.466778 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:31:16.466781 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:31:16.466785 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:31:16.466791 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:31:16.466797 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:31:16.466803 | orchestrator |
2026-01-10 14:31:16.466808 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-01-10 14:31:16.466814 | orchestrator | Saturday 10 January 2026 14:30:01 +0000 (0:00:02.176) 0:00:14.296 ******
2026-01-10 14:31:16.466820 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:16.466827 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:31:16.466834 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:31:16.466840 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:31:16.466846 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:31:16.466852 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:31:16.466859 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:31:16.466863 | orchestrator |
2026-01-10 14:31:16.466867 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-01-10 14:31:16.466870 | orchestrator | Saturday 10 January 2026 14:30:12 +0000 (0:00:10.369) 0:00:24.666 ******
2026-01-10 14:31:16.466874 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:31:16.466878 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:31:16.466881 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:31:16.466885 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:31:16.466889 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:31:16.466892 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:31:16.466896 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:16.466900 | orchestrator |
2026-01-10 14:31:16.466904 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-01-10 14:31:16.466907 | orchestrator | Saturday 10 January 2026 14:30:51 +0000 (0:00:39.660) 0:01:04.326 ******
2026-01-10 14:31:16.466911 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:31:16.466920 | orchestrator |
2026-01-10 14:31:16.466924 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-01-10 14:31:16.466928 | orchestrator | Saturday 10 January 2026 14:30:53 +0000 (0:00:01.687) 0:01:06.013 ******
2026-01-10 14:31:16.466931 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-01-10 14:31:16.466935 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-01-10 14:31:16.466942 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-01-10 14:31:16.466946 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-01-10 14:31:16.466949 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-01-10 14:31:16.466953 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-01-10 14:31:16.466957 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-01-10 14:31:16.466960 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-01-10 14:31:16.466964 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-01-10 14:31:16.466968 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-01-10 14:31:16.466971 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-01-10 14:31:16.466975 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-01-10 14:31:16.466979 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-01-10 14:31:16.466982 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-01-10 14:31:16.466986 | orchestrator |
2026-01-10 14:31:16.466990 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-01-10 14:31:16.466994 | orchestrator | Saturday 10 January 2026 14:30:59 +0000 (0:00:05.502) 0:01:11.516 ******
2026-01-10 14:31:16.466998 | orchestrator | ok: [testbed-manager]
2026-01-10 14:31:16.467001 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:31:16.467005 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:31:16.467009 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:31:16.467012 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:31:16.467016 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:31:16.467020 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:31:16.467023 | orchestrator |
2026-01-10 14:31:16.467027 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-01-10 14:31:16.467031 | orchestrator | Saturday 10 January 2026 14:31:00 +0000 (0:00:01.609) 0:01:13.126 ******
2026-01-10 14:31:16.467035 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:16.467038 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:31:16.467042 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:31:16.467046 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:31:16.467049 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:31:16.467053 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:31:16.467057 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:31:16.467060 | orchestrator |
2026-01-10 14:31:16.467064 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-01-10 14:31:16.467072 | orchestrator | Saturday 10 January 2026 14:31:02 +0000 (0:00:01.760) 0:01:14.887 ******
2026-01-10 14:31:16.467076 | orchestrator | ok: [testbed-manager]
2026-01-10 14:31:16.467079 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:31:16.467083 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:31:16.467087 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:31:16.467090 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:31:16.467094 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:31:16.467098 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:31:16.467101 | orchestrator |
2026-01-10 14:31:16.467105 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-01-10 14:31:16.467109 | orchestrator | Saturday 10 January 2026 14:31:04 +0000 (0:00:01.936) 0:01:16.823 ******
2026-01-10 14:31:16.467113 | orchestrator | ok: [testbed-manager]
2026-01-10 14:31:16.467124 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:31:16.467131 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:31:16.467135 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:31:16.467138 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:31:16.467142 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:31:16.467146 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:31:16.467149 | orchestrator |
2026-01-10 14:31:16.467153 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-01-10 14:31:16.467157 | orchestrator | Saturday 10 January 2026 14:31:07 +0000 (0:00:03.099) 0:01:19.923 ******
2026-01-10 14:31:16.467160 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-01-10 14:31:16.467166 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:31:16.467173 | orchestrator |
2026-01-10 14:31:16.467179 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-01-10 14:31:16.467185 | orchestrator | Saturday 10 January 2026 14:31:08 +0000 (0:00:01.474) 0:01:21.398 ******
2026-01-10 14:31:16.467191 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:16.467199 | orchestrator |
2026-01-10 14:31:16.467206 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-01-10 14:31:16.467212 | orchestrator | Saturday 10 January 2026 14:31:11 +0000 (0:00:02.148) 0:01:23.547 ******
2026-01-10 14:31:16.467218 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:31:16.467224 | orchestrator | changed: [testbed-manager]
2026-01-10 14:31:16.467230 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:31:16.467236 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:31:16.467242 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:31:16.467249 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:31:16.467255 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:31:16.467261 | orchestrator |
2026-01-10 14:31:16.467268 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:31:16.467273 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:31:16.467277 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:31:16.467281 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:31:16.467288 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:31:16.467292 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:31:16.467295 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:31:16.467299 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:31:16.467303 | orchestrator |
2026-01-10 14:31:16.467306 | orchestrator |
2026-01-10 14:31:16.467310 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:31:16.467314 | orchestrator | Saturday 10 January 2026 14:31:14 +0000 (0:00:03.143) 0:01:26.690 ******
2026-01-10 14:31:16.467317 | orchestrator | ===============================================================================
2026-01-10 14:31:16.467322 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 39.66s
2026-01-10 14:31:16.467329 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.37s
2026-01-10 14:31:16.467334 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.50s
2026-01-10 14:31:16.467345 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.62s
2026-01-10 14:31:16.467351 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 3.47s
2026-01-10 14:31:16.467358 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.44s
2026-01-10 14:31:16.467364 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.14s
2026-01-10 14:31:16.467370 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.10s
2026-01-10 14:31:16.467376 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.18s
2026-01-10 14:31:16.467379 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.15s
2026-01-10 14:31:16.467383 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.94s
2026-01-10 14:31:16.467390 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.76s
2026-01-10 14:31:16.467394 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.69s
2026-01-10 14:31:16.467398 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.61s
2026-01-10 14:31:16.467404 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.48s
2026-01-10 14:31:16.467410 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.12s
2026-01-10 14:31:16.467416 | orchestrator | 2026-01-10 14:31:16 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:31:19.504392 | orchestrator | 2026-01-10 14:31:19 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:31:19.505302 | orchestrator | 2026-01-10 14:31:19 | INFO  | Task b8481bf6-201d-4cfb-b42e-198246b7eaca is in state STARTED
2026-01-10 14:31:19.506785 | orchestrator | 2026-01-10 14:31:19 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED
2026-01-10 14:31:19.509197 | orchestrator | 2026-01-10 14:31:19 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:31:19.509239 | orchestrator | 2026-01-10 14:31:19 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:31:22.561364 | orchestrator | 2026-01-10 14:31:22 |
INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:31:22.561797 | orchestrator | 2026-01-10 14:31:22 | INFO  | Task b8481bf6-201d-4cfb-b42e-198246b7eaca is in state SUCCESS
2026-01-10 14:31:22.563596 | orchestrator | 2026-01-10 14:31:22 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED
2026-01-10 14:31:22.564643 | orchestrator | 2026-01-10 14:31:22 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:31:22.564953 | orchestrator | 2026-01-10 14:31:22 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:31:25.611574 | orchestrator | 2026-01-10 14:31:25 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:31:25.617308 | orchestrator | 2026-01-10 14:31:25 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED
2026-01-10 14:31:25.621052 | orchestrator | 2026-01-10 14:31:25 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:31:25.622282 | orchestrator | 2026-01-10 14:31:25 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:31:56 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:31:56.095815 | orchestrator | 2026-01-10 14:31:56 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:31:56.095869 | orchestrator | 2026-01-10 14:31:56 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:31:59.136842 | orchestrator | 2026-01-10 14:31:59 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:31:59.139625 | orchestrator | 2026-01-10 14:31:59 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:31:59.141818 | orchestrator | 2026-01-10 14:31:59 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:31:59.141882 | orchestrator | 2026-01-10 14:31:59 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:02.201411 | orchestrator | 2026-01-10 14:32:02 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:32:02.203109 | orchestrator | 2026-01-10 14:32:02 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:32:02.205287 | orchestrator | 2026-01-10 14:32:02 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:32:02.205318 | orchestrator | 2026-01-10 14:32:02 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:05.255899 | orchestrator | 2026-01-10 14:32:05 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:32:05.255988 | orchestrator | 2026-01-10 14:32:05 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:32:05.256006 | orchestrator | 2026-01-10 14:32:05 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:32:05.256013 | orchestrator | 2026-01-10 14:32:05 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:08.303423 | orchestrator | 2026-01-10 14:32:08 | INFO  | Task 
f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:32:08.306349 | orchestrator | 2026-01-10 14:32:08 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:32:08.309171 | orchestrator | 2026-01-10 14:32:08 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:32:08.309241 | orchestrator | 2026-01-10 14:32:08 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:11.366460 | orchestrator | 2026-01-10 14:32:11 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:32:11.372135 | orchestrator | 2026-01-10 14:32:11 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:32:11.375618 | orchestrator | 2026-01-10 14:32:11 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:32:11.375948 | orchestrator | 2026-01-10 14:32:11 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:14.413909 | orchestrator | 2026-01-10 14:32:14 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:32:14.416493 | orchestrator | 2026-01-10 14:32:14 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state STARTED 2026-01-10 14:32:14.418325 | orchestrator | 2026-01-10 14:32:14 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:32:14.421407 | orchestrator | 2026-01-10 14:32:14 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:17.461609 | orchestrator | 2026-01-10 14:32:17 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:32:17.466466 | orchestrator | 2026-01-10 14:32:17 | INFO  | Task 5d1dbc0e-dd11-45c5-bb7b-57b6226ec7e7 is in state SUCCESS 2026-01-10 14:32:17.470155 | orchestrator | 2026-01-10 14:32:17.470200 | orchestrator | 2026-01-10 14:32:17.470206 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-01-10 14:32:17.470211 | 
orchestrator |
2026-01-10 14:32:17.470216 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-01-10 14:32:17.470221 | orchestrator | Saturday 10 January 2026 14:30:09 +0000 (0:00:00.240) 0:00:00.240 ******
2026-01-10 14:32:17.470225 | orchestrator | ok: [testbed-manager]
2026-01-10 14:32:17.470230 | orchestrator |
2026-01-10 14:32:17.470234 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-01-10 14:32:17.470239 | orchestrator | Saturday 10 January 2026 14:30:10 +0000 (0:00:01.133) 0:00:01.374 ******
2026-01-10 14:32:17.470243 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-01-10 14:32:17.470248 | orchestrator |
2026-01-10 14:32:17.470252 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-01-10 14:32:17.470256 | orchestrator | Saturday 10 January 2026 14:30:11 +0000 (0:00:00.556) 0:00:01.931 ******
2026-01-10 14:32:17.470261 | orchestrator | changed: [testbed-manager]
2026-01-10 14:32:17.470265 | orchestrator |
2026-01-10 14:32:17.470270 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-01-10 14:32:17.470274 | orchestrator | Saturday 10 January 2026 14:30:12 +0000 (0:00:01.009) 0:00:02.941 ******
2026-01-10 14:32:17.470278 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-01-10 14:32:17.470282 | orchestrator | ok: [testbed-manager]
2026-01-10 14:32:17.470287 | orchestrator |
2026-01-10 14:32:17.470300 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-01-10 14:32:17.470305 | orchestrator | Saturday 10 January 2026 14:31:13 +0000 (0:01:01.194) 0:01:04.135 ******
2026-01-10 14:32:17.470309 | orchestrator | changed: [testbed-manager]
2026-01-10 14:32:17.470313 | orchestrator |
2026-01-10 14:32:17.470318 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:32:17.470322 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:32:17.470327 | orchestrator |
2026-01-10 14:32:17.470332 | orchestrator |
2026-01-10 14:32:17.470336 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:32:17.470340 | orchestrator | Saturday 10 January 2026 14:31:21 +0000 (0:00:07.942) 0:01:12.077 ******
2026-01-10 14:32:17.470345 | orchestrator | ===============================================================================
2026-01-10 14:32:17.470349 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 61.19s
2026-01-10 14:32:17.470353 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 7.94s
2026-01-10 14:32:17.470357 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.13s
2026-01-10 14:32:17.470362 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.01s
2026-01-10 14:32:17.470366 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.56s
2026-01-10 14:32:17.470370 | orchestrator |
2026-01-10 14:32:17.470375 | orchestrator |
2026-01-10 14:32:17.470379 | orchestrator | PLAY [Apply role common]
******************************************************* 2026-01-10 14:32:17.470393 | orchestrator | 2026-01-10 14:32:17.470398 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-10 14:32:17.470402 | orchestrator | Saturday 10 January 2026 14:29:39 +0000 (0:00:00.253) 0:00:00.253 ****** 2026-01-10 14:32:17.470407 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:32:17.470412 | orchestrator | 2026-01-10 14:32:17.470416 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-01-10 14:32:17.470420 | orchestrator | Saturday 10 January 2026 14:29:41 +0000 (0:00:01.450) 0:00:01.703 ****** 2026-01-10 14:32:17.470424 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-10 14:32:17.470429 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-10 14:32:17.470433 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-10 14:32:17.470437 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-10 14:32:17.470441 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-10 14:32:17.470446 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-10 14:32:17.470450 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-10 14:32:17.470454 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-10 14:32:17.470459 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-10 14:32:17.470463 | orchestrator | changed: [testbed-node-4] => 
(item=[{'service_name': 'cron'}, 'cron']) 2026-01-10 14:32:17.470468 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-10 14:32:17.470472 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-10 14:32:17.470475 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-10 14:32:17.470479 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-10 14:32:17.470483 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-10 14:32:17.470486 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-10 14:32:17.470497 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-10 14:32:17.470501 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-10 14:32:17.470505 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-10 14:32:17.470508 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-10 14:32:17.470512 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-10 14:32:17.470516 | orchestrator | 2026-01-10 14:32:17.470520 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-10 14:32:17.470523 | orchestrator | Saturday 10 January 2026 14:29:45 +0000 (0:00:04.473) 0:00:06.177 ****** 2026-01-10 14:32:17.470527 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:32:17.470531 | orchestrator | 2026-01-10 
14:32:17.470535 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-01-10 14:32:17.470539 | orchestrator | Saturday 10 January 2026 14:29:46 +0000 (0:00:01.187) 0:00:07.364 ****** 2026-01-10 14:32:17.470548 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.470557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.470561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.470565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.470569 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.470582 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.470586 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.470592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.470599 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.470606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.470613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.470619 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.470630 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 
14:32:17.470636 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.470645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.470664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.470671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-10 14:32:17.470678 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.470684 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.470690 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.470698 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.470702 | orchestrator | 2026-01-10 14:32:17.470706 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 
2026-01-10 14:32:17.470710 | orchestrator | Saturday 10 January 2026 14:29:52 +0000 (0:00:05.274) 0:00:12.639 ****** 2026-01-10 14:32:17.470717 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.470722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.470730 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.470735 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.470750 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:32:17.470754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.470758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.470762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.470766 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:17.470770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.470778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.470787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-01-10 14:32:17.470791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.470795 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:17.470799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.470806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.470812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.470819 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:32:17.470825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.470831 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:17.470838 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.470852 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-01-10 14:32:17.470858 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.470864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.470870 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:32:17.470877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.470884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.470891 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:32:17.470897 | orchestrator | 2026-01-10 14:32:17.470904 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-01-10 14:32:17.470910 | orchestrator | Saturday 10 January 2026 14:29:55 +0000 (0:00:03.339) 0:00:15.978 ****** 2026-01-10 14:32:17.470921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.470928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.470950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.470957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.470967 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.470973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.470980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.470987 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:17.470993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.471000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.471010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.471014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.471018 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:17.471024 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.471028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 
14:32:17.471032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.471036 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.471040 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:32:17.471044 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.471051 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:32:17.471055 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.471059 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:32:17.471065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.471069 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:17.471073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.471079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.471083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.471087 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:32:17.471091 | orchestrator | 2026-01-10 14:32:17.471095 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-01-10 14:32:17.471098 | orchestrator | Saturday 10 January 2026 14:30:01 +0000 (0:00:05.512) 0:00:21.490 ****** 2026-01-10 14:32:17.471102 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:32:17.471106 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:17.471110 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:17.471114 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:17.471117 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:32:17.471121 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:32:17.471125 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:32:17.471128 | orchestrator | 2026-01-10 14:32:17.471132 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-01-10 14:32:17.471136 | orchestrator | Saturday 10 January 2026 14:30:01 +0000 (0:00:00.916) 0:00:22.406 ****** 2026-01-10 14:32:17.471140 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:32:17.471143 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:17.471150 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:17.471153 | 
orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:17.471157 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:32:17.471161 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:32:17.471165 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:32:17.471168 | orchestrator | 2026-01-10 14:32:17.471172 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-01-10 14:32:17.471176 | orchestrator | Saturday 10 January 2026 14:30:03 +0000 (0:00:01.344) 0:00:23.751 ****** 2026-01-10 14:32:17.471180 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:32:17.471183 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:17.471187 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:17.471191 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:17.471195 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:32:17.471198 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:32:17.471202 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:32:17.471206 | orchestrator | 2026-01-10 14:32:17.471209 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-01-10 14:32:17.471213 | orchestrator | Saturday 10 January 2026 14:30:04 +0000 (0:00:00.851) 0:00:24.602 ****** 2026-01-10 14:32:17.471217 | orchestrator | changed: [testbed-manager] 2026-01-10 14:32:17.471221 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:32:17.471224 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:32:17.471228 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:32:17.471232 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:32:17.471235 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:32:17.471239 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:32:17.471243 | orchestrator | 2026-01-10 14:32:17.471246 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-01-10 
14:32:17.471250 | orchestrator | Saturday 10 January 2026 14:30:06 +0000 (0:00:02.608) 0:00:27.211 ****** 2026-01-10 14:32:17.471256 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471270 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471281 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471284 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 
14:32:17.471288 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471307 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471314 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471318 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471322 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471326 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471345 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471352 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471355 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471359 | orchestrator | 2026-01-10 14:32:17.471363 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-01-10 14:32:17.471367 | orchestrator | Saturday 10 January 2026 14:30:11 +0000 (0:00:05.059) 0:00:32.270 ****** 2026-01-10 14:32:17.471371 | 
orchestrator | [WARNING]: Skipped 2026-01-10 14:32:17.471375 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-01-10 14:32:17.471378 | orchestrator | to this access issue: 2026-01-10 14:32:17.471382 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-01-10 14:32:17.471386 | orchestrator | directory 2026-01-10 14:32:17.471390 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:32:17.471393 | orchestrator | 2026-01-10 14:32:17.471397 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-01-10 14:32:17.471401 | orchestrator | Saturday 10 January 2026 14:30:13 +0000 (0:00:01.225) 0:00:33.495 ****** 2026-01-10 14:32:17.471405 | orchestrator | [WARNING]: Skipped 2026-01-10 14:32:17.471408 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-01-10 14:32:17.471412 | orchestrator | to this access issue: 2026-01-10 14:32:17.471416 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-01-10 14:32:17.471419 | orchestrator | directory 2026-01-10 14:32:17.471423 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:32:17.471427 | orchestrator | 2026-01-10 14:32:17.471430 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-01-10 14:32:17.471434 | orchestrator | Saturday 10 January 2026 14:30:14 +0000 (0:00:00.976) 0:00:34.472 ****** 2026-01-10 14:32:17.471438 | orchestrator | [WARNING]: Skipped 2026-01-10 14:32:17.471441 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-01-10 14:32:17.471445 | orchestrator | to this access issue: 2026-01-10 14:32:17.471449 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-01-10 14:32:17.471453 | orchestrator | directory 2026-01-10 
14:32:17.471456 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:32:17.471460 | orchestrator | 2026-01-10 14:32:17.471464 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-01-10 14:32:17.471468 | orchestrator | Saturday 10 January 2026 14:30:14 +0000 (0:00:00.777) 0:00:35.249 ****** 2026-01-10 14:32:17.471471 | orchestrator | [WARNING]: Skipped 2026-01-10 14:32:17.471475 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-01-10 14:32:17.471479 | orchestrator | to this access issue: 2026-01-10 14:32:17.471482 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-01-10 14:32:17.471486 | orchestrator | directory 2026-01-10 14:32:17.471490 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:32:17.471496 | orchestrator | 2026-01-10 14:32:17.471502 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-01-10 14:32:17.471506 | orchestrator | Saturday 10 January 2026 14:30:16 +0000 (0:00:01.192) 0:00:36.441 ****** 2026-01-10 14:32:17.471509 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:32:17.471513 | orchestrator | changed: [testbed-manager] 2026-01-10 14:32:17.471517 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:32:17.471520 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:32:17.471524 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:32:17.471528 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:32:17.471531 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:32:17.471535 | orchestrator | 2026-01-10 14:32:17.471539 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-01-10 14:32:17.471543 | orchestrator | Saturday 10 January 2026 14:30:21 +0000 (0:00:05.829) 0:00:42.271 ****** 2026-01-10 14:32:17.471546 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-10 14:32:17.471550 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-10 14:32:17.471554 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-10 14:32:17.471558 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-10 14:32:17.471564 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-10 14:32:17.471567 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-10 14:32:17.471571 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-10 14:32:17.471575 | orchestrator | 2026-01-10 14:32:17.471579 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-01-10 14:32:17.471582 | orchestrator | Saturday 10 January 2026 14:30:27 +0000 (0:00:05.880) 0:00:48.151 ****** 2026-01-10 14:32:17.471586 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:32:17.471590 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:32:17.471593 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:32:17.471597 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:32:17.471601 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:32:17.471604 | orchestrator | changed: [testbed-manager] 2026-01-10 14:32:17.471608 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:32:17.471612 | orchestrator | 2026-01-10 14:32:17.471616 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-01-10 14:32:17.471619 | orchestrator | Saturday 10 January 2026 14:30:32 +0000 (0:00:04.743) 0:00:52.895 ****** 2026-01-10 
14:32:17.471623 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471627 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.471635 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.471645 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.471653 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471657 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471661 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471667 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471674 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.471684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.471690 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471694 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.471698 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.471708 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471712 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471716 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471722 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471726 | orchestrator | 2026-01-10 14:32:17.471730 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-01-10 14:32:17.471734 | orchestrator | Saturday 10 January 2026 14:30:35 +0000 (0:00:02.761) 0:00:55.656 ****** 2026-01-10 14:32:17.471747 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 
2026-01-10 14:32:17.471751 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-10 14:32:17.471755 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-10 14:32:17.471759 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-10 14:32:17.471765 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-10 14:32:17.471768 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-10 14:32:17.471772 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-10 14:32:17.471776 | orchestrator | 2026-01-10 14:32:17.471780 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-01-10 14:32:17.471783 | orchestrator | Saturday 10 January 2026 14:30:38 +0000 (0:00:03.301) 0:00:58.957 ****** 2026-01-10 14:32:17.471787 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-10 14:32:17.471791 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-10 14:32:17.471795 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-10 14:32:17.471798 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-10 14:32:17.471802 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-10 14:32:17.471806 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-10 14:32:17.471812 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-10 14:32:17.471816 | 
orchestrator | 2026-01-10 14:32:17.471820 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-01-10 14:32:17.471823 | orchestrator | Saturday 10 January 2026 14:30:41 +0000 (0:00:02.795) 0:01:01.753 ****** 2026-01-10 14:32:17.471827 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471835 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471843 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471853 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471857 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-10 14:32:17.471863 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471867 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471910 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471916 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-01-10 14:32:17.471920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471926 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471930 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-10 14:32:17.471938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471942 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471959 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:32:17.471967 | orchestrator | 2026-01-10 14:32:17.471971 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart 
containers] *** 2026-01-10 14:32:17.471975 | orchestrator | Saturday 10 January 2026 14:30:45 +0000 (0:00:04.316) 0:01:06.069 ****** 2026-01-10 14:32:17.471981 | orchestrator | changed: [testbed-manager] => { 2026-01-10 14:32:17.471985 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:32:17.471991 | orchestrator | } 2026-01-10 14:32:17.471995 | orchestrator | changed: [testbed-node-0] => { 2026-01-10 14:32:17.471998 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:32:17.472002 | orchestrator | } 2026-01-10 14:32:17.472006 | orchestrator | changed: [testbed-node-1] => { 2026-01-10 14:32:17.472009 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:32:17.472013 | orchestrator | } 2026-01-10 14:32:17.472017 | orchestrator | changed: [testbed-node-2] => { 2026-01-10 14:32:17.472020 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:32:17.472024 | orchestrator | } 2026-01-10 14:32:17.472028 | orchestrator | changed: [testbed-node-3] => { 2026-01-10 14:32:17.472031 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:32:17.472035 | orchestrator | } 2026-01-10 14:32:17.472038 | orchestrator | changed: [testbed-node-4] => { 2026-01-10 14:32:17.472042 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:32:17.472046 | orchestrator | } 2026-01-10 14:32:17.472049 | orchestrator | changed: [testbed-node-5] => { 2026-01-10 14:32:17.472053 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:32:17.472057 | orchestrator | } 2026-01-10 14:32:17.472061 | orchestrator | 2026-01-10 14:32:17.472064 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-10 14:32:17.472068 | orchestrator | Saturday 10 January 2026 14:30:46 +0000 (0:00:01.210) 0:01:07.280 ****** 2026-01-10 14:32:17.472072 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.472076 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.472080 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.472084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.472090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.472097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.472101 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:32:17.472107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.472112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.472116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.472119 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:32:17.472123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.472127 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:32:17.472131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.472141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.472148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.472154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.472158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.472162 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:32:17.472166 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:32:17.472170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.472174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.472178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.472182 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:32:17.472185 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-10 14:32:17.472191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.472197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:32:17.472201 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:32:17.472205 
| orchestrator | 2026-01-10 14:32:17.472209 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-01-10 14:32:17.472213 | orchestrator | Saturday 10 January 2026 14:30:49 +0000 (0:00:02.454) 0:01:09.734 ****** 2026-01-10 14:32:17.472216 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:32:17.472220 | orchestrator | changed: [testbed-manager] 2026-01-10 14:32:17.472224 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:32:17.472228 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:32:17.472234 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:32:17.472237 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:32:17.472241 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:32:17.472245 | orchestrator | 2026-01-10 14:32:17.472249 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-01-10 14:32:17.472252 | orchestrator | Saturday 10 January 2026 14:30:52 +0000 (0:00:02.892) 0:01:12.627 ****** 2026-01-10 14:32:17.472256 | orchestrator | changed: [testbed-manager] 2026-01-10 14:32:17.472260 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:32:17.472264 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:32:17.472267 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:32:17.472271 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:32:17.472275 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:32:17.472278 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:32:17.472282 | orchestrator | 2026-01-10 14:32:17.472286 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-10 14:32:17.472290 | orchestrator | Saturday 10 January 2026 14:30:53 +0000 (0:00:01.665) 0:01:14.292 ****** 2026-01-10 14:32:17.472293 | orchestrator | 2026-01-10 14:32:17.472297 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-10 
14:32:17.472301 | orchestrator | Saturday 10 January 2026 14:30:53 +0000 (0:00:00.087) 0:01:14.380 ****** 2026-01-10 14:32:17.472305 | orchestrator | 2026-01-10 14:32:17.472308 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-10 14:32:17.472312 | orchestrator | Saturday 10 January 2026 14:30:54 +0000 (0:00:00.100) 0:01:14.480 ****** 2026-01-10 14:32:17.472316 | orchestrator | 2026-01-10 14:32:17.472320 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-10 14:32:17.472323 | orchestrator | Saturday 10 January 2026 14:30:54 +0000 (0:00:00.126) 0:01:14.607 ****** 2026-01-10 14:32:17.472327 | orchestrator | 2026-01-10 14:32:17.472331 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-10 14:32:17.472335 | orchestrator | Saturday 10 January 2026 14:30:54 +0000 (0:00:00.548) 0:01:15.155 ****** 2026-01-10 14:32:17.472338 | orchestrator | 2026-01-10 14:32:17.472342 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-10 14:32:17.472346 | orchestrator | Saturday 10 January 2026 14:30:54 +0000 (0:00:00.093) 0:01:15.249 ****** 2026-01-10 14:32:17.472349 | orchestrator | 2026-01-10 14:32:17.472353 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-10 14:32:17.472359 | orchestrator | Saturday 10 January 2026 14:30:54 +0000 (0:00:00.087) 0:01:15.336 ****** 2026-01-10 14:32:17.472363 | orchestrator | 2026-01-10 14:32:17.472367 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-01-10 14:32:17.472370 | orchestrator | Saturday 10 January 2026 14:30:55 +0000 (0:00:00.117) 0:01:15.453 ****** 2026-01-10 14:32:17.472374 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:32:17.472378 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:32:17.472381 | 
orchestrator | changed: [testbed-node-1] 2026-01-10 14:32:17.472385 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:32:17.472389 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:32:17.472393 | orchestrator | changed: [testbed-manager] 2026-01-10 14:32:17.472396 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:32:17.472400 | orchestrator | 2026-01-10 14:32:17.472404 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-01-10 14:32:17.472407 | orchestrator | Saturday 10 January 2026 14:31:27 +0000 (0:00:32.850) 0:01:48.304 ****** 2026-01-10 14:32:17.472411 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:32:17.472415 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:32:17.472419 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:32:17.472422 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:32:17.472426 | orchestrator | changed: [testbed-manager] 2026-01-10 14:32:17.472430 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:32:17.472433 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:32:17.472437 | orchestrator | 2026-01-10 14:32:17.472441 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-01-10 14:32:17.472445 | orchestrator | Saturday 10 January 2026 14:32:04 +0000 (0:00:36.238) 0:02:24.542 ****** 2026-01-10 14:32:17.472453 | orchestrator | ok: [testbed-manager] 2026-01-10 14:32:17.472456 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:32:17.472460 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:32:17.472464 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:32:17.472468 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:32:17.472471 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:32:17.472475 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:32:17.472479 | orchestrator | 2026-01-10 14:32:17.472482 | orchestrator | RUNNING HANDLER [common : Restart cron container] 
****************************** 2026-01-10 14:32:17.472486 | orchestrator | Saturday 10 January 2026 14:32:06 +0000 (0:00:02.305) 0:02:26.847 ****** 2026-01-10 14:32:17.472492 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:32:17.472496 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:32:17.472500 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:32:17.472503 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:32:17.472507 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:32:17.472511 | orchestrator | changed: [testbed-manager] 2026-01-10 14:32:17.472514 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:32:17.472518 | orchestrator | 2026-01-10 14:32:17.472522 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:32:17.472526 | orchestrator | testbed-manager : ok=24  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-10 14:32:17.472530 | orchestrator | testbed-node-0 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-10 14:32:17.472534 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-10 14:32:17.472537 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-10 14:32:17.472541 | orchestrator | testbed-node-3 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-10 14:32:17.472548 | orchestrator | testbed-node-4 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-10 14:32:17.472551 | orchestrator | testbed-node-5 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-10 14:32:17.472555 | orchestrator | 2026-01-10 14:32:17.472559 | orchestrator | 2026-01-10 14:32:17.472563 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:32:17.472566 | 
orchestrator | Saturday 10 January 2026 14:32:16 +0000 (0:00:09.680) 0:02:36.528 ****** 2026-01-10 14:32:17.472570 | orchestrator | =============================================================================== 2026-01-10 14:32:17.472574 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 36.24s 2026-01-10 14:32:17.472578 | orchestrator | common : Restart fluentd container ------------------------------------- 32.85s 2026-01-10 14:32:17.472581 | orchestrator | common : Restart cron container ----------------------------------------- 9.68s 2026-01-10 14:32:17.472585 | orchestrator | common : Copying over cron logrotate config file ------------------------ 5.88s 2026-01-10 14:32:17.472589 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.83s 2026-01-10 14:32:17.472592 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 5.51s 2026-01-10 14:32:17.472596 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.27s 2026-01-10 14:32:17.472600 | orchestrator | common : Copying over config.json files for services -------------------- 5.06s 2026-01-10 14:32:17.472604 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.74s 2026-01-10 14:32:17.472609 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.47s 2026-01-10 14:32:17.472613 | orchestrator | service-check-containers : common | Check containers -------------------- 4.32s 2026-01-10 14:32:17.472617 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.34s 2026-01-10 14:32:17.472621 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.30s 2026-01-10 14:32:17.472624 | orchestrator | common : Creating log volume -------------------------------------------- 2.89s 2026-01-10 14:32:17.472628 | orchestrator | 
common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.80s 2026-01-10 14:32:17.472632 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.76s 2026-01-10 14:32:17.472635 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.61s 2026-01-10 14:32:17.472639 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.45s 2026-01-10 14:32:17.472643 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.31s 2026-01-10 14:32:17.472646 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.67s 2026-01-10 14:32:17.472690 | orchestrator | 2026-01-10 14:32:17 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:32:17.472697 | orchestrator | 2026-01-10 14:32:17 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:20.514524 | orchestrator | 2026-01-10 14:32:20 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:32:20.515520 | orchestrator | 2026-01-10 14:32:20 | INFO  | Task 9a40c5aa-ab95-47c5-ab1e-0d26ff04627f is in state STARTED 2026-01-10 14:32:20.523411 | orchestrator | 2026-01-10 14:32:20 | INFO  | Task 99392778-5155-4436-834d-e71f805258ed is in state STARTED 2026-01-10 14:32:20.523475 | orchestrator | 2026-01-10 14:32:20 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:32:20.523480 | orchestrator | 2026-01-10 14:32:20 | INFO  | Task 37b8a5c0-171e-41fe-88e8-f4b1ac333197 is in state STARTED 2026-01-10 14:32:20.523485 | orchestrator | 2026-01-10 14:32:20 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:32:20.523510 | orchestrator | 2026-01-10 14:32:20 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:23.559061 | orchestrator | 2026-01-10 14:32:23 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in 
state STARTED 2026-01-10 14:32:23.565330 | orchestrator | 2026-01-10 14:32:23 | INFO  | Task 9a40c5aa-ab95-47c5-ab1e-0d26ff04627f is in state STARTED 2026-01-10 14:32:23.570259 | orchestrator | 2026-01-10 14:32:23 | INFO  | Task 99392778-5155-4436-834d-e71f805258ed is in state STARTED 2026-01-10 14:32:23.581147 | orchestrator | 2026-01-10 14:32:23 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:32:23.589386 | orchestrator | 2026-01-10 14:32:23 | INFO  | Task 37b8a5c0-171e-41fe-88e8-f4b1ac333197 is in state STARTED 2026-01-10 14:32:23.596511 | orchestrator | 2026-01-10 14:32:23 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:32:23.596633 | orchestrator | 2026-01-10 14:32:23 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:26.635632 | orchestrator | 2026-01-10 14:32:26 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:32:26.637010 | orchestrator | 2026-01-10 14:32:26 | INFO  | Task 9a40c5aa-ab95-47c5-ab1e-0d26ff04627f is in state STARTED 2026-01-10 14:32:26.637053 | orchestrator | 2026-01-10 14:32:26 | INFO  | Task 99392778-5155-4436-834d-e71f805258ed is in state STARTED 2026-01-10 14:32:26.638060 | orchestrator | 2026-01-10 14:32:26 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:32:26.641390 | orchestrator | 2026-01-10 14:32:26 | INFO  | Task 37b8a5c0-171e-41fe-88e8-f4b1ac333197 is in state STARTED 2026-01-10 14:32:26.641432 | orchestrator | 2026-01-10 14:32:26 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:32:26.641446 | orchestrator | 2026-01-10 14:32:26 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:29.686877 | orchestrator | 2026-01-10 14:32:29 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:32:29.686956 | orchestrator | 2026-01-10 14:32:29 | INFO  | Task 9a40c5aa-ab95-47c5-ab1e-0d26ff04627f is in state 
STARTED 2026-01-10 14:32:29.687061 | orchestrator | 2026-01-10 14:32:29 | INFO  | Task 99392778-5155-4436-834d-e71f805258ed is in state STARTED 2026-01-10 14:32:29.687876 | orchestrator | 2026-01-10 14:32:29 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:32:29.688629 | orchestrator | 2026-01-10 14:32:29 | INFO  | Task 37b8a5c0-171e-41fe-88e8-f4b1ac333197 is in state STARTED 2026-01-10 14:32:29.689301 | orchestrator | 2026-01-10 14:32:29 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:32:29.689337 | orchestrator | 2026-01-10 14:32:29 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:32.727203 | orchestrator | 2026-01-10 14:32:32 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:32:32.730914 | orchestrator | 2026-01-10 14:32:32 | INFO  | Task 9a40c5aa-ab95-47c5-ab1e-0d26ff04627f is in state STARTED 2026-01-10 14:32:32.732347 | orchestrator | 2026-01-10 14:32:32 | INFO  | Task 99392778-5155-4436-834d-e71f805258ed is in state STARTED 2026-01-10 14:32:32.735145 | orchestrator | 2026-01-10 14:32:32 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:32:32.736154 | orchestrator | 2026-01-10 14:32:32 | INFO  | Task 37b8a5c0-171e-41fe-88e8-f4b1ac333197 is in state STARTED 2026-01-10 14:32:32.736754 | orchestrator | 2026-01-10 14:32:32 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:32:32.736863 | orchestrator | 2026-01-10 14:32:32 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:35.767359 | orchestrator | 2026-01-10 14:32:35 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:32:35.767926 | orchestrator | 2026-01-10 14:32:35 | INFO  | Task 9a40c5aa-ab95-47c5-ab1e-0d26ff04627f is in state STARTED 2026-01-10 14:32:35.769239 | orchestrator | 2026-01-10 14:32:35 | INFO  | Task 99392778-5155-4436-834d-e71f805258ed is in state STARTED 
2026-01-10 14:32:35.769944 | orchestrator | 2026-01-10 14:32:35 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:32:35.771031 | orchestrator | 2026-01-10 14:32:35 | INFO  | Task 37b8a5c0-171e-41fe-88e8-f4b1ac333197 is in state STARTED 2026-01-10 14:32:35.771991 | orchestrator | 2026-01-10 14:32:35 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:32:35.772094 | orchestrator | 2026-01-10 14:32:35 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:38.815326 | orchestrator | 2026-01-10 14:32:38 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:32:38.816113 | orchestrator | 2026-01-10 14:32:38 | INFO  | Task 9a40c5aa-ab95-47c5-ab1e-0d26ff04627f is in state STARTED 2026-01-10 14:32:38.819133 | orchestrator | 2026-01-10 14:32:38 | INFO  | Task 99392778-5155-4436-834d-e71f805258ed is in state STARTED 2026-01-10 14:32:38.820297 | orchestrator | 2026-01-10 14:32:38 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:32:38.821113 | orchestrator | 2026-01-10 14:32:38 | INFO  | Task 37b8a5c0-171e-41fe-88e8-f4b1ac333197 is in state STARTED 2026-01-10 14:32:38.822201 | orchestrator | 2026-01-10 14:32:38 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:32:38.822471 | orchestrator | 2026-01-10 14:32:38 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:41.930276 | orchestrator | 2026-01-10 14:32:41 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:32:41.932287 | orchestrator | 2026-01-10 14:32:41 | INFO  | Task 9a40c5aa-ab95-47c5-ab1e-0d26ff04627f is in state STARTED 2026-01-10 14:32:41.934065 | orchestrator | 2026-01-10 14:32:41 | INFO  | Task 99392778-5155-4436-834d-e71f805258ed is in state STARTED 2026-01-10 14:32:41.934981 | orchestrator | 2026-01-10 14:32:41 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 
2026-01-10 14:32:41.937538 | orchestrator | 2026-01-10 14:32:41 | INFO  | Task 37b8a5c0-171e-41fe-88e8-f4b1ac333197 is in state STARTED 2026-01-10 14:32:41.937637 | orchestrator | 2026-01-10 14:32:41 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:32:41.937768 | orchestrator | 2026-01-10 14:32:41 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:45.027232 | orchestrator | 2026-01-10 14:32:45 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:32:45.030476 | orchestrator | 2026-01-10 14:32:45 | INFO  | Task 9a40c5aa-ab95-47c5-ab1e-0d26ff04627f is in state STARTED 2026-01-10 14:32:45.031927 | orchestrator | 2026-01-10 14:32:45 | INFO  | Task 99392778-5155-4436-834d-e71f805258ed is in state STARTED 2026-01-10 14:32:45.036403 | orchestrator | 2026-01-10 14:32:45 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:32:45.036577 | orchestrator | 2026-01-10 14:32:45 | INFO  | Task 37b8a5c0-171e-41fe-88e8-f4b1ac333197 is in state STARTED 2026-01-10 14:32:45.038069 | orchestrator | 2026-01-10 14:32:45 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:32:45.038863 | orchestrator | 2026-01-10 14:32:45 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:32:48.102146 | orchestrator | 2026-01-10 14:32:48 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:32:48.102174 | orchestrator | 2026-01-10 14:32:48 | INFO  | Task 9a40c5aa-ab95-47c5-ab1e-0d26ff04627f is in state STARTED 2026-01-10 14:32:48.102326 | orchestrator | 2026-01-10 14:32:48 | INFO  | Task 99392778-5155-4436-834d-e71f805258ed is in state STARTED 2026-01-10 14:32:48.104421 | orchestrator | 2026-01-10 14:32:48 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:32:48.109183 | orchestrator | 2026-01-10 14:32:48 | INFO  | Task 37b8a5c0-171e-41fe-88e8-f4b1ac333197 is in state STARTED 
2026-01-10 14:32:48.118253 | orchestrator | 2026-01-10 14:32:48 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:32:48.118307 | orchestrator | 2026-01-10 14:32:48 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:32:51.160670 | orchestrator | 2026-01-10 14:32:51 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:32:51.163775 | orchestrator | 2026-01-10 14:32:51 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:32:51.164364 | orchestrator | 2026-01-10 14:32:51 | INFO  | Task 9a40c5aa-ab95-47c5-ab1e-0d26ff04627f is in state STARTED
2026-01-10 14:32:51.165266 | orchestrator | 2026-01-10 14:32:51 | INFO  | Task 99392778-5155-4436-834d-e71f805258ed is in state STARTED
2026-01-10 14:32:51.165985 | orchestrator | 2026-01-10 14:32:51 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:32:51.166866 | orchestrator | 2026-01-10 14:32:51 | INFO  | Task 37b8a5c0-171e-41fe-88e8-f4b1ac333197 is in state SUCCESS
2026-01-10 14:32:51.167028 | orchestrator |
2026-01-10 14:32:51.167045 | orchestrator |
2026-01-10 14:32:51.167052 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:32:51.167059 | orchestrator |
2026-01-10 14:32:51.167066 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:32:51.167074 | orchestrator | Saturday 10 January 2026 14:32:27 +0000 (0:00:00.482) 0:00:00.482 ******
2026-01-10 14:32:51.167081 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:32:51.167089 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:32:51.167097 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:32:51.167104 | orchestrator |
2026-01-10 14:32:51.167112 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:32:51.167119 | orchestrator | Saturday 10 January 2026 14:32:27 +0000 (0:00:00.407) 0:00:00.890 ******
2026-01-10 14:32:51.167126 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-01-10 14:32:51.167134 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-01-10 14:32:51.167141 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-01-10 14:32:51.167147 | orchestrator |
2026-01-10 14:32:51.167154 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-01-10 14:32:51.167161 | orchestrator |
2026-01-10 14:32:51.167168 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-01-10 14:32:51.167185 | orchestrator | Saturday 10 January 2026 14:32:28 +0000 (0:00:01.170) 0:00:02.060 ******
2026-01-10 14:32:51.167192 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-1, testbed-node-0, testbed-node-2
2026-01-10 14:32:51.167200 | orchestrator |
2026-01-10 14:32:51.167207 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-01-10 14:32:51.167214 | orchestrator | Saturday 10 January 2026 14:32:29 +0000 (0:00:01.030) 0:00:03.091 ******
2026-01-10 14:32:51.167221 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-10 14:32:51.167228 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-10 14:32:51.167249 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-10 14:32:51.167257 | orchestrator |
2026-01-10 14:32:51.167264 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-01-10 14:32:51.167271 | orchestrator | Saturday 10 January 2026 14:32:31 +0000 (0:00:01.320) 0:00:04.411 ******
2026-01-10 14:32:51.167277 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-10 14:32:51.167285 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-10 14:32:51.167292 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-10 14:32:51.167299 | orchestrator |
2026-01-10 14:32:51.167306 | orchestrator | TASK [service-check-containers : memcached | Check containers] *****************
2026-01-10 14:32:51.167313 | orchestrator | Saturday 10 January 2026 14:32:33 +0000 (0:00:02.127) 0:00:06.538 ******
2026-01-10 14:32:51.167323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-10 14:32:51.167333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-10 14:32:51.167350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-10 14:32:51.167358 | orchestrator |
2026-01-10 14:32:51.167365 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] ***
2026-01-10 14:32:51.167372 | orchestrator | Saturday 10 January 2026 14:32:34 +0000 (0:00:01.325) 0:00:07.864 ******
2026-01-10 14:32:51.167379 | orchestrator | changed: [testbed-node-0] => {
2026-01-10 14:32:51.167386 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:32:51.167393 | orchestrator | }
2026-01-10 14:32:51.167400 | orchestrator | changed: [testbed-node-1] => {
2026-01-10 14:32:51.167407 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:32:51.167414 | orchestrator | }
2026-01-10 14:32:51.167421 | orchestrator | changed: [testbed-node-2] => {
2026-01-10 14:32:51.167428 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:32:51.167435 | orchestrator | }
2026-01-10 14:32:51.167442 | orchestrator |
2026-01-10 14:32:51.167449 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-10 14:32:51.167461 | orchestrator | Saturday 10 January 2026 14:32:35 +0000 (0:00:01.013) 0:00:08.878 ******
2026-01-10 14:32:51.167472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-10 14:32:51.167480 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:32:51.167488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-10 14:32:51.167497 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:32:51.167503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-10 14:32:51.167509 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:32:51.167515 | orchestrator |
2026-01-10 14:32:51.167521 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-01-10 14:32:51.167527 | orchestrator | Saturday 10 January 2026 14:32:37 +0000 (0:00:01.950) 0:00:10.828 ******
2026-01-10 14:32:51.167534 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:32:51.167540 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:32:51.167546 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:32:51.167552 | orchestrator |
2026-01-10 14:32:51.167559 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:32:51.167565 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:32:51.167573 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:32:51.167579 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:32:51.167586 | orchestrator |
2026-01-10 14:32:51.167593 | orchestrator |
2026-01-10 14:32:51.167600 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:32:51.167608 | orchestrator | Saturday 10 January 2026 14:32:46 +0000 (0:00:08.939) 0:00:19.768 ******
2026-01-10 14:32:51.167625 | orchestrator | ===============================================================================
2026-01-10 14:32:51.167632 | orchestrator | memcached : Restart memcached container --------------------------------- 8.94s
2026-01-10 14:32:51.167640 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.13s
2026-01-10 14:32:51.167647 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.95s
2026-01-10 14:32:51.167654 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.33s
2026-01-10 14:32:51.167662 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.32s
2026-01-10 14:32:51.167670 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.17s
2026-01-10 14:32:51.167678 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.03s
2026-01-10 14:32:51.167686 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.02s
2026-01-10 14:32:51.167695 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.41s
2026-01-10 14:32:51.169421 | orchestrator | 2026-01-10 14:32:51 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:32:51.169468 | orchestrator | 2026-01-10 14:32:51 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:32:54.225852 | orchestrator | 2026-01-10 14:32:54 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:32:54.295893 | orchestrator | 2026-01-10 14:32:54 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:32:54.295948 | orchestrator | 2026-01-10 14:32:54 | INFO  | Task 9a40c5aa-ab95-47c5-ab1e-0d26ff04627f is in state STARTED
2026-01-10 14:32:54.295958 | orchestrator | 2026-01-10 14:32:54 | INFO  | Task 99392778-5155-4436-834d-e71f805258ed is in state STARTED
2026-01-10 14:32:54.295968 | orchestrator | 2026-01-10 14:32:54 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:32:54.295973 | orchestrator | 2026-01-10 14:32:54 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:32:54.295981 | orchestrator | 2026-01-10 14:32:54 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:32:57.308317 | orchestrator | 2026-01-10 14:32:57 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:32:57.312937 | orchestrator | 2026-01-10 14:32:57 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:32:57.313346 | orchestrator | 2026-01-10 14:32:57 | INFO  | Task 9a40c5aa-ab95-47c5-ab1e-0d26ff04627f is in state STARTED
2026-01-10 14:32:57.314064 | orchestrator | 2026-01-10 14:32:57 | INFO  | Task 99392778-5155-4436-834d-e71f805258ed is in state STARTED
2026-01-10 14:32:57.314723 | orchestrator | 2026-01-10 14:32:57 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:32:57.315390 | orchestrator | 2026-01-10 14:32:57 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:32:57.315410 | orchestrator | 2026-01-10 14:32:57 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:33:00.351579 | orchestrator | 2026-01-10 14:33:00 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:33:00.351641 | orchestrator | 2026-01-10 14:33:00 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:33:00.351650 | orchestrator | 2026-01-10 14:33:00 | INFO  | Task 9a40c5aa-ab95-47c5-ab1e-0d26ff04627f is in state STARTED
2026-01-10 14:33:00.351658 | orchestrator | 2026-01-10 14:33:00 | INFO  | Task 99392778-5155-4436-834d-e71f805258ed is in state STARTED
2026-01-10 14:33:00.351664 | orchestrator | 2026-01-10 14:33:00 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:33:00.351689 | orchestrator | 2026-01-10 14:33:00 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:33:00.351697 | orchestrator | 2026-01-10 14:33:00 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:33:03.378646 | orchestrator | 2026-01-10 14:33:03 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:33:03.380575 | orchestrator | 2026-01-10 14:33:03 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:33:03.382383 | orchestrator | 2026-01-10 14:33:03 | INFO  | Task 9a40c5aa-ab95-47c5-ab1e-0d26ff04627f is in state SUCCESS
2026-01-10 14:33:03.383565 | orchestrator |
2026-01-10 14:33:03.383599 | orchestrator |
2026-01-10 14:33:03.383606 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:33:03.383614 | orchestrator |
2026-01-10 14:33:03.383620 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:33:03.383626 | orchestrator | Saturday 10 January 2026 14:32:24 +0000 (0:00:00.696) 0:00:00.696 ******
2026-01-10 14:33:03.383632 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:33:03.383638 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:33:03.383643 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:33:03.383648 | orchestrator |
2026-01-10 14:33:03.383654 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:33:03.383659 | orchestrator | Saturday 10 January 2026 14:32:25 +0000 (0:00:00.704) 0:00:01.401 ******
2026-01-10 14:33:03.383665 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-01-10 14:33:03.383669 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-01-10 14:33:03.383672 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-01-10 14:33:03.383675 | orchestrator |
2026-01-10 14:33:03.383679 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-01-10 14:33:03.383685 | orchestrator |
2026-01-10 14:33:03.383690 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-01-10 14:33:03.383695 | orchestrator | Saturday 10 January 2026 14:32:26 +0000 (0:00:00.975) 0:00:02.376 ******
2026-01-10 14:33:03.383700 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:33:03.383705 | orchestrator |
2026-01-10 14:33:03.383711 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-01-10 14:33:03.383716 | orchestrator | Saturday 10 January 2026 14:32:27 +0000 (0:00:01.021) 0:00:03.397 ******
2026-01-10 14:33:03.383728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383783 | orchestrator |
2026-01-10 14:33:03.383788 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-01-10 14:33:03.383794 | orchestrator | Saturday 10 January 2026 14:32:29 +0000 (0:00:01.924) 0:00:05.321 ******
2026-01-10 14:33:03.383826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383867 | orchestrator |
2026-01-10 14:33:03.383873 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-01-10 14:33:03.383878 | orchestrator | Saturday 10 January 2026 14:32:32 +0000 (0:00:03.297) 0:00:08.619 ******
2026-01-10 14:33:03.383886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383926 | orchestrator |
2026-01-10 14:33:03.383931 | orchestrator | TASK [service-check-containers : redis | Check containers] *********************
2026-01-10 14:33:03.383937 | orchestrator | Saturday 10 January 2026 14:32:36 +0000 (0:00:03.230) 0:00:11.850 ******
2026-01-10 14:33:03.383942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:33:03.383983 | orchestrator |
2026-01-10 14:33:03.383988 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] ***
2026-01-10 14:33:03.383993 | orchestrator | Saturday 10 January 2026 14:32:38 +0000 (0:00:02.359) 0:00:14.210 ******
2026-01-10 14:33:03.383999 | orchestrator | changed: [testbed-node-0] => {
2026-01-10 14:33:03.384004 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:33:03.384010 | orchestrator | }
2026-01-10 14:33:03.384040 | orchestrator | changed: [testbed-node-1] => {
2026-01-10 14:33:03.384046 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:33:03.384051 | orchestrator | }
2026-01-10 14:33:03.384055 | orchestrator | changed: [testbed-node-2] => {
2026-01-10 14:33:03.384060 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:33:03.384066 | orchestrator | }
2026-01-10 14:33:03.384070 | orchestrator |
2026-01-10 14:33:03.384077 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-10 14:33:03.384082 | orchestrator | Saturday 10 January 2026 14:32:38 +0000 (0:00:00.503) 0:00:14.713 ******
2026-01-10 14:33:03.384088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:33:03.384098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:33:03.384104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:33:03.384109 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:33:03.384119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-10 14:33:03.384124 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:33:03.384130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-10 14:33:03.384140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment':
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-01-10 14:33:03.384145 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:33:03.384151 | orchestrator | 2026-01-10 14:33:03.384157 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-10 14:33:03.384162 | orchestrator | Saturday 10 January 2026 14:32:41 +0000 (0:00:02.122) 0:00:16.835 ****** 2026-01-10 14:33:03.384168 | orchestrator | 2026-01-10 14:33:03.384174 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-10 14:33:03.384182 | orchestrator | Saturday 10 January 2026 14:32:41 +0000 (0:00:00.078) 0:00:16.914 ****** 2026-01-10 14:33:03.384189 | orchestrator | 2026-01-10 14:33:03.384195 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-10 14:33:03.384200 | orchestrator | Saturday 10 January 2026 14:32:41 +0000 (0:00:00.121) 0:00:17.035 ****** 2026-01-10 14:33:03.384206 | orchestrator | 2026-01-10 14:33:03.384212 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-01-10 14:33:03.384220 | orchestrator | Saturday 10 January 2026 14:32:41 +0000 (0:00:00.103) 0:00:17.139 ****** 2026-01-10 14:33:03.384226 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:33:03.384232 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:33:03.384238 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:33:03.384244 | 
orchestrator | 2026-01-10 14:33:03.384251 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-01-10 14:33:03.384256 | orchestrator | Saturday 10 January 2026 14:32:50 +0000 (0:00:08.984) 0:00:26.124 ****** 2026-01-10 14:33:03.384262 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:33:03.384267 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:33:03.384272 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:33:03.384278 | orchestrator | 2026-01-10 14:33:03.384283 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:33:03.384290 | orchestrator | testbed-node-0 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-10 14:33:03.384296 | orchestrator | testbed-node-1 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-10 14:33:03.384302 | orchestrator | testbed-node-2 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-10 14:33:03.384307 | orchestrator | 2026-01-10 14:33:03.384313 | orchestrator | 2026-01-10 14:33:03.384318 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:33:03.384324 | orchestrator | Saturday 10 January 2026 14:32:59 +0000 (0:00:09.270) 0:00:35.394 ****** 2026-01-10 14:33:03.384329 | orchestrator | =============================================================================== 2026-01-10 14:33:03.384335 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.27s 2026-01-10 14:33:03.384341 | orchestrator | redis : Restart redis container ----------------------------------------- 8.98s 2026-01-10 14:33:03.384352 | orchestrator | redis : Copying over default config.json files -------------------------- 3.30s 2026-01-10 14:33:03.384362 | orchestrator | redis : Copying over redis config files --------------------------------- 3.23s 2026-01-10 
14:33:03.384372 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.36s 2026-01-10 14:33:03.384382 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.12s 2026-01-10 14:33:03.384391 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.92s 2026-01-10 14:33:03.384401 | orchestrator | redis : include_tasks --------------------------------------------------- 1.02s 2026-01-10 14:33:03.384410 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.98s 2026-01-10 14:33:03.384420 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.70s 2026-01-10 14:33:03.384430 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.50s 2026-01-10 14:33:03.384440 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.30s 2026-01-10 14:33:03.384450 | orchestrator | 2026-01-10 14:33:03 | INFO  | Task 99392778-5155-4436-834d-e71f805258ed is in state STARTED 2026-01-10 14:33:03.384501 | orchestrator | 2026-01-10 14:33:03 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED 2026-01-10 14:33:03.385011 | orchestrator | 2026-01-10 14:33:03 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:33:03.386320 | orchestrator | 2026-01-10 14:33:03 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:33:06.417520 | orchestrator | 2026-01-10 14:33:06 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:33:06.418262 | orchestrator | 2026-01-10 14:33:06 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED 2026-01-10 14:33:06.419325 | orchestrator | 2026-01-10 14:33:06 | INFO  | Task 99392778-5155-4436-834d-e71f805258ed is in state STARTED 2026-01-10 14:33:06.422451 | orchestrator | 2026-01-10 14:33:06 | INFO  | Task 
58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:33:06.426755 | orchestrator | 2026-01-10 14:33:06 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:33:06.426863 | orchestrator | 2026-01-10 14:33:06 | INFO  | Wait 1 second(s) until the next check
[identical polling output repeats roughly every 3 seconds from 14:33:09 through 14:33:40; all five tasks remain in state STARTED]
2026-01-10 14:33:43.218362 | orchestrator |
2026-01-10 14:33:43.218452 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:33:43.218457 | orchestrator |
2026-01-10 14:33:43.218463 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:33:43.218468 | orchestrator | Saturday 10 January 2026 14:32:26 +0000 (0:00:00.810) 0:00:00.810 ******
2026-01-10 14:33:43.218472 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:33:43.218478 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:33:43.218482 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:33:43.218486 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:33:43.218489 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:33:43.218493 | orchestrator | ok: [testbed-node-5] 2026-01-10
14:33:43.218497 | orchestrator | 2026-01-10 14:33:43.218501 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:33:43.218536 | orchestrator | Saturday 10 January 2026 14:32:27 +0000 (0:00:01.498) 0:00:02.309 ****** 2026-01-10 14:33:43.218540 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-10 14:33:43.218546 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-10 14:33:43.218549 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-10 14:33:43.218553 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-10 14:33:43.218557 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-10 14:33:43.218561 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-10 14:33:43.218584 | orchestrator | 2026-01-10 14:33:43.218588 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-01-10 14:33:43.218592 | orchestrator | 2026-01-10 14:33:43.218595 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-01-10 14:33:43.218610 | orchestrator | Saturday 10 January 2026 14:32:29 +0000 (0:00:01.523) 0:00:03.833 ****** 2026-01-10 14:33:43.218615 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:33:43.218620 | orchestrator | 2026-01-10 14:33:43.218623 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-10 14:33:43.218627 | orchestrator | Saturday 10 January 2026 14:32:31 +0000 (0:00:02.115) 0:00:05.949 ****** 2026-01-10 14:33:43.218631 | orchestrator | changed: 
[testbed-node-0] => (item=openvswitch) 2026-01-10 14:33:43.218635 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-10 14:33:43.218639 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-10 14:33:43.218643 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-10 14:33:43.218646 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-10 14:33:43.218650 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-10 14:33:43.218654 | orchestrator | 2026-01-10 14:33:43.218657 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-10 14:33:43.218661 | orchestrator | Saturday 10 January 2026 14:32:32 +0000 (0:00:01.591) 0:00:07.540 ****** 2026-01-10 14:33:43.218665 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-10 14:33:43.218668 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-10 14:33:43.218672 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-10 14:33:43.218676 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-10 14:33:43.218679 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-10 14:33:43.218683 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-10 14:33:43.218687 | orchestrator | 2026-01-10 14:33:43.218690 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-10 14:33:43.218694 | orchestrator | Saturday 10 January 2026 14:32:34 +0000 (0:00:02.082) 0:00:09.622 ****** 2026-01-10 14:33:43.218698 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-01-10 14:33:43.218701 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:33:43.218706 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-01-10 14:33:43.218712 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:33:43.218718 | orchestrator | 
skipping: [testbed-node-2] => (item=openvswitch)  2026-01-10 14:33:43.218725 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:33:43.218734 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-01-10 14:33:43.218741 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:33:43.218746 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-01-10 14:33:43.218752 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:33:43.218758 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-01-10 14:33:43.218764 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:33:43.218769 | orchestrator | 2026-01-10 14:33:43.218775 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-01-10 14:33:43.218781 | orchestrator | Saturday 10 January 2026 14:32:37 +0000 (0:00:02.562) 0:00:12.185 ****** 2026-01-10 14:33:43.218787 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:33:43.218792 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:33:43.218797 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:33:43.218803 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:33:43.218809 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:33:43.218815 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:33:43.218821 | orchestrator | 2026-01-10 14:33:43.218881 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-01-10 14:33:43.218890 | orchestrator | Saturday 10 January 2026 14:32:39 +0000 (0:00:02.208) 0:00:14.394 ****** 2026-01-10 14:33:43.218919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:33:43.218931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:33:43.218947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:33:43.218956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:33:43.218962 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:33:43.218968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:33:43.218987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:33:43.218993 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219004 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219011 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219017 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219033 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219039 | orchestrator | 2026-01-10 14:33:43.219045 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-01-10 14:33:43.219052 | orchestrator | Saturday 10 January 2026 14:32:42 +0000 (0:00:02.752) 0:00:17.146 ****** 2026-01-10 14:33:43.219059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219084 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219090 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219124 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219148 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219159 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219170 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219174 | orchestrator | 2026-01-10 14:33:43.219178 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-01-10 14:33:43.219183 | orchestrator | Saturday 10 January 2026 14:32:46 +0000 (0:00:03.790) 0:00:20.937 ****** 
2026-01-10 14:33:43.219187 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:33:43.219191 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:33:43.219195 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:33:43.219200 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:33:43.219204 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:33:43.219208 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:33:43.219212 | orchestrator | 2026-01-10 14:33:43.219217 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-01-10 14:33:43.219221 | orchestrator | Saturday 10 January 2026 14:32:48 +0000 (0:00:02.187) 0:00:23.125 ****** 2026-01-10 14:33:43.219228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219238 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219255 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219260 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219276 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219296 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219300 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-10 14:33:43.219304 | orchestrator | 2026-01-10 14:33:43.219307 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-01-10 14:33:43.219312 | orchestrator | Saturday 10 January 2026 14:32:51 +0000 (0:00:03.529) 0:00:26.654 ****** 2026-01-10 14:33:43.219316 | orchestrator | changed: [testbed-node-0] => { 2026-01-10 14:33:43.219320 | orchestrator | 
 "msg": "Notifying handlers" 2026-01-10 14:33:43.219324 | orchestrator | } 2026-01-10 14:33:43.219328 | orchestrator | changed: [testbed-node-1] => { 2026-01-10 14:33:43.219331 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:33:43.219335 | orchestrator | } 2026-01-10 14:33:43.219339 | orchestrator | changed: [testbed-node-2] => { 2026-01-10 14:33:43.219343 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:33:43.219346 | orchestrator | } 2026-01-10 14:33:43.219350 | orchestrator | changed: [testbed-node-3] => { 2026-01-10 14:33:43.219354 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:33:43.219357 | orchestrator | } 2026-01-10 14:33:43.219365 | orchestrator | changed: [testbed-node-4] => { 2026-01-10 14:33:43.219368 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:33:43.219372 | orchestrator | } 2026-01-10 14:33:43.219376 | orchestrator | changed: [testbed-node-5] => { 2026-01-10 14:33:43.219380 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:33:43.219383 | orchestrator | } 2026-01-10 14:33:43.219387 | orchestrator | 2026-01-10 14:33:43.219391 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-10 14:33:43.219394 | orchestrator | Saturday 10 January 2026 14:32:54 +0000 (0:00:02.463) 0:00:29.118 ****** 2026-01-10 14:33:43.219398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-10 14:33:43.219402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-10 14:33:43.219409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-10 14:33:43 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:33:43.219415 | orchestrator | 2026-01-10 14:33:43 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:33:43.219420 | orchestrator | 2026-01-10 14:33:43 | INFO  | Task 99392778-5155-4436-834d-e71f805258ed is in state SUCCESS
2026-01-10 14:33:43.219791 | orchestrator | 2026-01-10 14:33:43 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:33:43.219802 | orchestrator | 2026-01-10 14:33:43 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:33:43.219806 | orchestrator | 2026-01-10 14:33:43 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:33:43.219993 | orchestrator |
2026-01-10 14:33:43.220010 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:33:43.220017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-10 14:33:43.220035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-10 14:33:43.220041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-10 14:33:43.220052 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-10 14:33:43.220058 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-10 14:33:43.220070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-10 14:33:43.220077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-10 14:33:43.220087 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:33:43.220093 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:33:43.220099 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:33:43.220104 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:33:43.220111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 
'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-10 14:33:43.220121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-10 14:33:43.220128 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:33:43.220134 | orchestrator |
2026-01-10 14:33:43.220142 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-10 14:33:43.220147 | orchestrator | Saturday 10 January 2026 14:32:57 +0000 (0:00:02.962) 0:00:32.080 ******
2026-01-10 14:33:43.220151 | orchestrator |
2026-01-10 14:33:43.220155 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-10 14:33:43.220161 | orchestrator | Saturday 10 January 2026 14:32:57 +0000 (0:00:00.273) 0:00:32.353 ******
2026-01-10 14:33:43.220167 | orchestrator |
2026-01-10 14:33:43.220172 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-10 14:33:43.220180 | orchestrator | Saturday 10 January 2026 14:32:58 +0000 (0:00:00.392) 0:00:32.746 ******
2026-01-10 14:33:43.220187 | orchestrator |
2026-01-10 14:33:43.220195 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-10 14:33:43.220202 | orchestrator | Saturday 10 January 2026 14:32:58 +0000 (0:00:00.405) 0:00:33.152 ******
2026-01-10 14:33:43.220207 | orchestrator |
2026-01-10 14:33:43.220213 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-10 14:33:43.220218 | orchestrator | Saturday 10 January 2026 14:32:58 +0000 (0:00:00.527) 0:00:33.679 ******
2026-01-10 14:33:43.220225 | orchestrator |
2026-01-10 14:33:43.220231 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-10 14:33:43.220236 | orchestrator | Saturday 10 January 2026 14:32:59 +0000 (0:00:00.340) 0:00:34.019 ******
2026-01-10 14:33:43.220242 | orchestrator |
2026-01-10 14:33:43.220247 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-01-10 14:33:43.220259 | orchestrator | Saturday 10 January 2026 14:32:59 +0000 (0:00:00.330) 0:00:34.350 ******
2026-01-10 14:33:43.220264 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:33:43.220271 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:33:43.220276 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:33:43.220282 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:33:43.220287 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:33:43.220293 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:33:43.220298 | orchestrator |
2026-01-10 14:33:43.220303 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-01-10 14:33:43.220313 | orchestrator | Saturday 10 January 2026 14:33:09 +0000 (0:00:09.573) 0:00:43.924 ******
2026-01-10 14:33:43.220319 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:33:43.220326 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:33:43.220331 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:33:43.220337 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:33:43.220344 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:33:43.220350 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:33:43.220356 | orchestrator |
2026-01-10 14:33:43.220361 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-01-10 14:33:43.220368 | orchestrator | Saturday 10 January 2026 14:33:11 +0000 (0:00:02.475) 0:00:46.400 ******
2026-01-10 14:33:43.220373 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:33:43.220379 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:33:43.220384 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:33:43.220390 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:33:43.220395 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:33:43.220401 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:33:43.220407 | orchestrator |
2026-01-10 14:33:43.220413 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-01-10 14:33:43.220419 | orchestrator | Saturday 10 January 2026 14:33:21 +0000 (0:00:10.139) 0:00:56.539 ******
2026-01-10 14:33:43.220425 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-01-10 14:33:43.220431 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-01-10 14:33:43.220438 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-01-10 14:33:43.220442 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-01-10 14:33:43.220446 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-01-10 14:33:43.220450 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-01-10 14:33:43.220454 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-01-10 14:33:43.220458 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-01-10 14:33:43.220462 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-01-10 14:33:43.220465 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-01-10 14:33:43.220469 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-01-10 14:33:43.220473 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-01-10 14:33:43.220476 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-10 14:33:43.220483 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-10 14:33:43.220494 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-10 14:33:43.220498 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-10 14:33:43.220502 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-10 14:33:43.220507 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-10 14:33:43.220513 | orchestrator |
2026-01-10 14:33:43.220521 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-01-10 14:33:43.220529 | orchestrator | Saturday 10 January 2026 14:33:28 +0000 (0:00:07.071) 0:01:03.611 ******
2026-01-10 14:33:43.220535 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-01-10 14:33:43.220541 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:33:43.220548 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-01-10 14:33:43.220554 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:33:43.220559 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-01-10 14:33:43.220565 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:33:43.220569 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-01-10 14:33:43.220575 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-01-10 14:33:43.220580 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-01-10 14:33:43.220586 | orchestrator |
2026-01-10 14:33:43.220591 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-01-10 14:33:43.220597 | orchestrator | Saturday 10 January 2026 14:33:31 +0000 (0:00:02.194) 0:01:05.806 ******
2026-01-10 14:33:43.220603 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-01-10 14:33:43.220609 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-01-10 14:33:43.220614 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:33:43.220620 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:33:43.220626 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-01-10 14:33:43.220632 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:33:43.220638 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-01-10 14:33:43.220649 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-01-10 14:33:43.220655 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-01-10 14:33:43.220661 | orchestrator |
2026-01-10 14:33:43.220667 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-01-10 14:33:43.220673 | orchestrator | Saturday 10 January 2026 14:33:34 +0000 (0:00:03.559) 0:01:09.366 ******
2026-01-10 14:33:43.220678 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:33:43.220684 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:33:43.220690 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:33:43.220696 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:33:43.220701 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:33:43.220707 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:33:43.220713 | orchestrator |
2026-01-10 14:33:43.220719 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:33:43.220726 | orchestrator | testbed-node-0 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-10 14:33:43.220733 | orchestrator | testbed-node-1 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-10 14:33:43.220739 | orchestrator | testbed-node-2 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-10 14:33:43.220745 | orchestrator | testbed-node-3 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-10 14:33:43.220756 | orchestrator | testbed-node-4 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-10 14:33:43.220763 | orchestrator | testbed-node-5 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-10 14:33:43.220768 | orchestrator |
2026-01-10 14:33:43.220774 | orchestrator |
2026-01-10 14:33:43.220780 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:33:43.220786 | orchestrator | Saturday 10 January 2026 14:33:42 +0000 (0:00:08.030) 0:01:17.396 ******
2026-01-10 14:33:43.220792 | orchestrator | ===============================================================================
2026-01-10 14:33:43.220798 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.17s
2026-01-10 14:33:43.220803 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.57s
2026-01-10 14:33:43.220809 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.07s
2026-01-10 14:33:43.220815 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.79s
2026-01-10 14:33:43.220821 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.56s
2026-01-10 14:33:43.220827 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.53s
2026-01-10 14:33:43.220858 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.96s
2026-01-10 14:33:43.220865 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.75s
2026-01-10 14:33:43.220871 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.56s
2026-01-10 14:33:43.220877 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.48s
2026-01-10 14:33:43.220883 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 2.46s
2026-01-10 14:33:43.220889 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.27s
2026-01-10 14:33:43.220895 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.21s
2026-01-10 14:33:43.220900 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.19s
2026-01-10 14:33:43.220906 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.19s
2026-01-10 14:33:43.220912 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.11s
2026-01-10 14:33:43.220918 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.08s
2026-01-10 14:33:43.220924 | orchestrator | module-load : Load modules ---------------------------------------------- 1.59s
2026-01-10 14:33:43.220929 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.53s
2026-01-10 14:33:43.220935 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.50s
2026-01-10 14:33:46.245946 | orchestrator | 2026-01-10 14:33:46 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:33:46.246808 | orchestrator | 2026-01-10 14:33:46 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:33:46.247539 | orchestrator | 2026-01-10 14:33:46 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:33:46.248460 | orchestrator | 2026-01-10 14:33:46 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:33:46.249517 | orchestrator | 2026-01-10 14:33:46 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:33:46.251478 | orchestrator | 2026-01-10 14:33:46 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:33:49.288100 | orchestrator | 2026-01-10 14:33:49 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:33:49.292145 | orchestrator | 2026-01-10 14:33:49 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:33:49.293513 | orchestrator | 2026-01-10 14:33:49 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:33:49.295014 | orchestrator | 2026-01-10 14:33:49 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:33:49.296455 | orchestrator | 2026-01-10 14:33:49 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:33:49.296569 | orchestrator | 2026-01-10 14:33:49 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:33:52.335363 | orchestrator | 2026-01-10 14:33:52 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:33:52.336087 | orchestrator | 2026-01-10 14:33:52 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:33:52.337260 | orchestrator | 2026-01-10 14:33:52 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:33:52.342096 | orchestrator | 2026-01-10 14:33:52 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:33:52.346610 | orchestrator | 2026-01-10 14:33:52 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:33:52.346672 | orchestrator | 2026-01-10 14:33:52 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:33:55.453450 | orchestrator | 2026-01-10 14:33:55 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:33:55.454650 | orchestrator | 2026-01-10 14:33:55 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:33:55.457263 | orchestrator | 2026-01-10 14:33:55 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:33:55.459008 | orchestrator | 2026-01-10 14:33:55 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:33:55.460359 | orchestrator | 2026-01-10 14:33:55 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:33:55.460401 | orchestrator | 2026-01-10 14:33:55 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:33:58.543412 | orchestrator | 2026-01-10 14:33:58 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:33:58.543780 | orchestrator | 2026-01-10 14:33:58 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:33:58.544740 | orchestrator | 2026-01-10 14:33:58 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:33:58.546280 | orchestrator | 2026-01-10 14:33:58 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:33:58.549544 | orchestrator | 2026-01-10 14:33:58 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:33:58.549603 | orchestrator | 2026-01-10 14:33:58 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:01.623494 | orchestrator | 2026-01-10 14:34:01 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:01.623988 | orchestrator | 2026-01-10 14:34:01 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:01.624946 | orchestrator | 2026-01-10 14:34:01 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:34:01.625572 | orchestrator | 2026-01-10 14:34:01 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:01.627800 | orchestrator | 2026-01-10 14:34:01 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:01.627861 | orchestrator | 2026-01-10 14:34:01 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:04.673792 | orchestrator | 2026-01-10 14:34:04 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:04.673907 | orchestrator | 2026-01-10 14:34:04 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:04.673915 | orchestrator | 2026-01-10 14:34:04 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:34:04.673920 | orchestrator | 2026-01-10 14:34:04 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:04.673924 | orchestrator | 2026-01-10 14:34:04 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:04.673929 | orchestrator | 2026-01-10 14:34:04 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:07.774381 | orchestrator | 2026-01-10 14:34:07 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:07.774490 | orchestrator | 2026-01-10 14:34:07 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:07.775102 | orchestrator | 2026-01-10 14:34:07 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:34:07.775953 | orchestrator | 2026-01-10 14:34:07 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:07.776539 | orchestrator | 2026-01-10 14:34:07 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:07.776632 | orchestrator | 2026-01-10 14:34:07 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:11.034403 | orchestrator | 2026-01-10 14:34:11 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:11.034912 | orchestrator | 2026-01-10 14:34:11 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:11.036172 | orchestrator | 2026-01-10 14:34:11 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state STARTED
2026-01-10 14:34:11.036983 | orchestrator | 2026-01-10 14:34:11 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:11.039453 | orchestrator | 2026-01-10 14:34:11 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:11.039545 | orchestrator | 2026-01-10 14:34:11 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:14.101944 | orchestrator | 2026-01-10 14:34:14 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:14.102230 | orchestrator | 2026-01-10 14:34:14 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:14.104164 | orchestrator | 2026-01-10 14:34:14 | INFO  | Task 7f2176df-6abf-4012-befb-2479535bac44 is in state STARTED
2026-01-10 14:34:14.105279 | orchestrator | 2026-01-10 14:34:14 | INFO  | Task 58117f0b-6074-45cc-99bd-993bb6ddc811 is in state SUCCESS
2026-01-10 14:34:14.107964 | orchestrator |
2026-01-10 14:34:14.108051 | orchestrator |
2026-01-10 14:34:14.108068 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-01-10 14:34:14.108082 | orchestrator |
2026-01-10 14:34:14.108094 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-01-10 14:34:14.108107 | orchestrator | Saturday 10 January 2026 14:29:39 +0000 (0:00:00.137) 0:00:00.137 ******
2026-01-10 14:34:14.108120 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:34:14.108133 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:34:14.108145 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:34:14.108158 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:34:14.108183 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:34:14.108195 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:34:14.108206 | orchestrator |
2026-01-10 14:34:14.108240 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-01-10 14:34:14.108254 | orchestrator | Saturday 10 January 2026 14:29:40 +0000 (0:00:00.584) 0:00:00.721 ******
2026-01-10 14:34:14.108266 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:34:14.108279 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:34:14.108290 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:34:14.108302 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.108314 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:14.108325 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:14.108337 | orchestrator |
2026-01-10 14:34:14.108359 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-01-10 14:34:14.108370 | orchestrator | Saturday 10 January 2026 14:29:41 +0000 (0:00:00.785) 0:00:01.507 ******
2026-01-10 14:34:14.108382 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:34:14.108393 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:34:14.108404 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:34:14.108415 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.108426 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:14.108437 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:14.108449 | orchestrator |
2026-01-10 14:34:14.108461 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-01-10 14:34:14.108473 | orchestrator | Saturday 10 January 2026 14:29:42 +0000 (0:00:00.666) 0:00:02.174 ******
2026-01-10 14:34:14.108484 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:34:14.108495 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:34:14.108507 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:34:14.108518 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:34:14.108529 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:34:14.108540 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:34:14.108590 | orchestrator |
2026-01-10 14:34:14.108604 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-01-10 14:34:14.108616 | orchestrator | Saturday 10 January 2026 14:29:44 +0000 (0:00:02.269) 0:00:04.444 ******
2026-01-10 14:34:14.108627 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:34:14.108639 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:34:14.108650 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:34:14.108662 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:34:14.108674 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:34:14.108686 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:34:14.108699 | orchestrator |
2026-01-10 14:34:14.108711 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-01-10 14:34:14.108723 | orchestrator | Saturday 10 January 2026 14:29:45 +0000 (0:00:01.197) 0:00:05.641 ******
2026-01-10 14:34:14.108734 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:34:14.108745 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:34:14.108757 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:34:14.108768 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:34:14.108779 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:34:14.108791 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:34:14.108802 | orchestrator |
2026-01-10 14:34:14.108814 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-01-10 14:34:14.108825 | orchestrator | Saturday 10 January 2026 14:29:47 +0000 (0:00:01.636) 0:00:07.278 ******
2026-01-10 14:34:14.108835 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:34:14.108864 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:34:14.108876 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:34:14.108887 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.108898 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:14.108909 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:14.108920 | orchestrator |
2026-01-10 14:34:14.108931 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-01-10 14:34:14.108942 | orchestrator | Saturday 10 January 2026 14:29:48 +0000 (0:00:01.097) 0:00:08.375 ******
2026-01-10 14:34:14.108964 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:34:14.108975 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:34:14.108985 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:34:14.108996 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.109007 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:14.109018 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:14.109029 | orchestrator |
2026-01-10 14:34:14.109041 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-01-10 14:34:14.109051 | orchestrator | Saturday 10 January 2026 14:29:49 +0000 (0:00:00.944) 0:00:09.319 ******
2026-01-10 14:34:14.109062 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-10 14:34:14.109073 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-10 14:34:14.109084 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:34:14.109094 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-10 14:34:14.109105 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-10 14:34:14.109115 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:34:14.109126 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-10 14:34:14.109137 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-10 14:34:14.109149 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-10 14:34:14.109159 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-10 14:34:14.109188 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:34:14.109199 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-10 14:34:14.109210 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-10 14:34:14.109220 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.109231 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:14.109241 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-10 14:34:14.109260 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-10 14:34:14.109271 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:14.109282 | orchestrator |
2026-01-10 14:34:14.109292 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-01-10 14:34:14.109303 | orchestrator | Saturday 10 January 2026 14:29:50 +0000 (0:00:01.145) 0:00:10.465 ******
2026-01-10 14:34:14.109313 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:34:14.109324 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:34:14.109335 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:34:14.109346 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.109357 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:14.109367 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:14.109378 | orchestrator |
2026-01-10 14:34:14.109389 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-01-10 14:34:14.109400 | orchestrator | Saturday 10 January 2026 14:29:51 +0000 (0:00:01.301) 0:00:11.766 ******
2026-01-10 14:34:14.109412 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:34:14.109424 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:34:14.109434 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:34:14.109445 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:34:14.109456 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:34:14.109467 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:34:14.109478 | orchestrator |
2026-01-10 14:34:14.109489 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-01-10 14:34:14.109500 | orchestrator | Saturday 10 January 2026 14:29:52 +0000 (0:00:00.949) 0:00:12.716 ******
2026-01-10 14:34:14.109511 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:34:14.109522 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:34:14.109543 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:34:14.109555 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:34:14.109566 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:34:14.109577 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:34:14.109588 | orchestrator |
2026-01-10 14:34:14.109598 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-01-10 14:34:14.109609 | orchestrator | Saturday 10 January 2026 14:29:58 +0000 (0:00:05.919) 0:00:18.635 ******
2026-01-10 14:34:14.109620 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:34:14.109629 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:34:14.109640 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:34:14.109651 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.109662 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:14.109674 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:14.109685 | orchestrator |
2026-01-10 14:34:14.109697 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-01-10 14:34:14.109709 | orchestrator | Saturday 10 January 2026 14:29:59 +0000 (0:00:01.286) 0:00:19.922 ******
2026-01-10 14:34:14.109720 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:34:14.109732 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:34:14.109744 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.109755 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:34:14.109767 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:14.109778 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:14.109789 | orchestrator |
2026-01-10 14:34:14.109800 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-01-10 14:34:14.109812 | orchestrator | Saturday 10 January 2026 14:30:01 +0000 (0:00:01.869) 0:00:21.792 ******
2026-01-10 14:34:14.109823 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:34:14.109832 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:34:14.109870 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:34:14.109884 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.109894 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:14.109904 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:14.109915 | orchestrator |
2026-01-10 14:34:14.109925 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-01-10 14:34:14.109935 | orchestrator | Saturday 10 January 2026 14:30:02 +0000 (0:00:00.596) 0:00:22.388 ******
2026-01-10 14:34:14.109945 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-01-10 14:34:14.109956 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-01-10 14:34:14.109966 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:34:14.109977 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-01-10 14:34:14.109988 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-01-10 14:34:14.109999 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:34:14.110010 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-01-10 14:34:14.110071 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-01-10 14:34:14.110084 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:34:14.110097 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-01-10 14:34:14.110108 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-01-10 14:34:14.110119 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.110130 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-01-10 14:34:14.110142 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-01-10 14:34:14.110153 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:14.110164 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-01-10 14:34:14.110177 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-01-10 14:34:14.110187 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:14.110198 | orchestrator |
2026-01-10 14:34:14.110208 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-01-10 14:34:14.110245 | orchestrator | Saturday 10 January 2026 14:30:03 +0000 (0:00:01.189) 0:00:23.578 ******
2026-01-10 14:34:14.110257 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:34:14.110270 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:34:14.110282 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:34:14.110295 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.110306 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:14.110319 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:14.110331 | orchestrator |
2026-01-10 14:34:14.110344 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-01-10 14:34:14.110362 | orchestrator | Saturday 10 January 2026 14:30:04 +0000 (0:00:00.645) 0:00:24.224 ******
2026-01-10 14:34:14.110374 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:34:14.110386 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:34:14.110398 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:34:14.110410 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.110423 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:14.110435 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:14.110447 | orchestrator |
2026-01-10 14:34:14.110458 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-01-10 14:34:14.110470 | orchestrator |
2026-01-10 14:34:14.110482 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-01-10 14:34:14.110494 | orchestrator | Saturday 10 January 2026 14:30:05 +0000 (0:00:01.811) 0:00:26.035 ******
2026-01-10 14:34:14.110506 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:34:14.110517 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:34:14.110530 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:34:14.110542 | orchestrator |
2026-01-10 14:34:14.110554 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-01-10 14:34:14.110568 | orchestrator | Saturday 10 January 2026 14:30:07 +0000 (0:00:01.483) 0:00:27.519 ******
2026-01-10 14:34:14.110580 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:34:14.110593 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:34:14.110605 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:34:14.110618 | orchestrator |
2026-01-10 14:34:14.110631 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-01-10 14:34:14.110644 | orchestrator | Saturday 10 January 2026 14:30:09 +0000 (0:00:01.773) 0:00:29.292 ******
2026-01-10 14:34:14.110658 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:34:14.110669 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:34:14.110682 |
orchestrator | ok: [testbed-node-2] 2026-01-10 14:34:14.110695 | orchestrator | 2026-01-10 14:34:14.110707 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-01-10 14:34:14.110719 | orchestrator | Saturday 10 January 2026 14:30:10 +0000 (0:00:01.122) 0:00:30.414 ****** 2026-01-10 14:34:14.110732 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:34:14.110745 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:34:14.110758 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:34:14.110771 | orchestrator | 2026-01-10 14:34:14.110784 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-01-10 14:34:14.110797 | orchestrator | Saturday 10 January 2026 14:30:10 +0000 (0:00:00.707) 0:00:31.122 ****** 2026-01-10 14:34:14.110810 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:34:14.110823 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:34:14.110836 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:34:14.110894 | orchestrator | 2026-01-10 14:34:14.110907 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-01-10 14:34:14.110920 | orchestrator | Saturday 10 January 2026 14:30:11 +0000 (0:00:00.294) 0:00:31.416 ****** 2026-01-10 14:34:14.110933 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:34:14.110946 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:34:14.110959 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:34:14.110971 | orchestrator | 2026-01-10 14:34:14.110984 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-01-10 14:34:14.111009 | orchestrator | Saturday 10 January 2026 14:30:12 +0000 (0:00:00.885) 0:00:32.302 ****** 2026-01-10 14:34:14.111021 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:34:14.111034 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:34:14.111046 | orchestrator | changed: 
[testbed-node-0] 2026-01-10 14:34:14.111058 | orchestrator | 2026-01-10 14:34:14.111069 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-01-10 14:34:14.111082 | orchestrator | Saturday 10 January 2026 14:30:14 +0000 (0:00:02.161) 0:00:34.463 ****** 2026-01-10 14:34:14.111093 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:34:14.111105 | orchestrator | 2026-01-10 14:34:14.111117 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-01-10 14:34:14.111129 | orchestrator | Saturday 10 January 2026 14:30:14 +0000 (0:00:00.476) 0:00:34.940 ****** 2026-01-10 14:34:14.111141 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:34:14.111152 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:34:14.111164 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:34:14.111176 | orchestrator | 2026-01-10 14:34:14.111187 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-01-10 14:34:14.111198 | orchestrator | Saturday 10 January 2026 14:30:17 +0000 (0:00:03.110) 0:00:38.051 ****** 2026-01-10 14:34:14.111209 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:34:14.111221 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:34:14.111232 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:34:14.111244 | orchestrator | 2026-01-10 14:34:14.111256 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-01-10 14:34:14.111268 | orchestrator | Saturday 10 January 2026 14:30:18 +0000 (0:00:01.048) 0:00:39.099 ****** 2026-01-10 14:34:14.111279 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:34:14.111292 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:34:14.111303 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:34:14.111316 | orchestrator | 2026-01-10 14:34:14.111328 | 
orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-01-10 14:34:14.111340 | orchestrator | Saturday 10 January 2026 14:30:20 +0000 (0:00:01.472) 0:00:40.572 ****** 2026-01-10 14:34:14.111353 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:34:14.111365 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:34:14.111377 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:34:14.111389 | orchestrator | 2026-01-10 14:34:14.111401 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-01-10 14:34:14.111420 | orchestrator | Saturday 10 January 2026 14:30:22 +0000 (0:00:01.634) 0:00:42.206 ****** 2026-01-10 14:34:14.111432 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:34:14.111444 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:34:14.111455 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:34:14.111466 | orchestrator | 2026-01-10 14:34:14.111478 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-01-10 14:34:14.111489 | orchestrator | Saturday 10 January 2026 14:30:23 +0000 (0:00:00.990) 0:00:43.197 ****** 2026-01-10 14:34:14.111501 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:34:14.111520 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:34:14.111532 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:34:14.111545 | orchestrator | 2026-01-10 14:34:14.111557 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-01-10 14:34:14.111570 | orchestrator | Saturday 10 January 2026 14:30:23 +0000 (0:00:00.549) 0:00:43.746 ****** 2026-01-10 14:34:14.111582 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:34:14.111594 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:34:14.111606 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:34:14.111619 | orchestrator | 2026-01-10 14:34:14.111631 | 
orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-01-10 14:34:14.111643 | orchestrator | Saturday 10 January 2026 14:30:26 +0000 (0:00:02.502) 0:00:46.249 ****** 2026-01-10 14:34:14.111665 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:34:14.111678 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:34:14.111690 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:34:14.111703 | orchestrator | 2026-01-10 14:34:14.111716 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-01-10 14:34:14.111729 | orchestrator | Saturday 10 January 2026 14:30:28 +0000 (0:00:02.441) 0:00:48.690 ****** 2026-01-10 14:34:14.111741 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:34:14.111754 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:34:14.111766 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:34:14.111779 | orchestrator | 2026-01-10 14:34:14.111791 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-01-10 14:34:14.111803 | orchestrator | Saturday 10 January 2026 14:30:30 +0000 (0:00:01.941) 0:00:50.631 ****** 2026-01-10 14:34:14.111814 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-10 14:34:14.111827 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-10 14:34:14.111840 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-10 14:34:14.111870 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
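The FAILED - RETRYING messages above are normal while the servers converge: they come from an Ansible `until` loop that polls the API until every master appears. The following is a hypothetical sketch of such a task, not the verbatim `k3s_server` role code; the command, label selector, retry counts, and group name are assumptions:

```yaml
# Hypothetical sketch of a "wait until all masters joined" task
# (assumed values; not the actual k3s_server role implementation).
- name: Verify that all nodes actually joined
  ansible.builtin.command:
    cmd: >-
      k3s kubectl get nodes
      -l "node-role.kubernetes.io/master=true"
      -o jsonpath={.items[*].metadata.name}
  register: nodes
  # Succeed only when every host in the master group is listed.
  until: nodes.rc == 0 and (nodes.stdout.split() | length) == (groups['master'] | length)
  retries: 20
  delay: 10
  changed_when: false
```

Because `retries: 20` starts at "(20 retries left)", a few failed polls before the cluster settles are expected; only exhausting all retries fails the play.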
2026-01-10 14:34:14.111882 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-10 14:34:14.111893 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-10 14:34:14.111904 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-10 14:34:14.111915 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-10 14:34:14.111926 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-10 14:34:14.111937 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-10 14:34:14.111948 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-10 14:34:14.111959 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
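The hint "check k3s-init.service if this fails" refers to the transient unit that the earlier "Init cluster inside the transient k3s-init service" task starts: the first `k3s server` run happens inside a throwaway systemd unit so that its journal can be inspected and the unit killed once the permanent service file is installed. A sketch of that pattern follows; the exact flags, unit name, and variables are assumptions, not the role's verbatim task:

```yaml
# Sketch of the transient-init pattern (assumed values). systemd-run
# creates a disposable unit named k3s-init; if the join check fails,
# its logs can be read with: journalctl -u k3s-init
- name: Init cluster inside the transient k3s-init service
  ansible.builtin.command:
    cmd: >-
      systemd-run -p RestartSec=2 -p Restart=on-failure
      --unit=k3s-init
      k3s server --cluster-init --token {{ k3s_token }}
    creates: /etc/systemd/system/k3s.service
```

This explains the later "Kill the temporary service used for initialization" and the skipped "Save logs of k3s-init.service" tasks: the transient unit is only a bootstrap vehicle.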
2026-01-10 14:34:14.111971 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:34:14.111982 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:34:14.111993 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:34:14.112004 | orchestrator |
2026-01-10 14:34:14.112015 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-01-10 14:34:14.112026 | orchestrator | Saturday 10 January 2026 14:31:14 +0000 (0:00:44.032) 0:01:34.664 ******
2026-01-10 14:34:14.112037 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.112048 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:14.112059 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:14.112070 | orchestrator |
2026-01-10 14:34:14.112082 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-01-10 14:34:14.112093 | orchestrator | Saturday 10 January 2026 14:31:14 +0000 (0:00:00.322) 0:01:34.987 ******
2026-01-10 14:34:14.112104 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:34:14.112115 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:34:14.112126 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:34:14.112145 | orchestrator |
2026-01-10 14:34:14.112157 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-01-10 14:34:14.112168 | orchestrator | Saturday 10 January 2026 14:31:15 +0000 (0:00:00.942) 0:01:35.930 ******
2026-01-10 14:34:14.112179 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:34:14.112190 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:34:14.112201 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:34:14.112211 | orchestrator |
2026-01-10 14:34:14.112229 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-01-10 14:34:14.112240 | orchestrator | Saturday 10 January 2026 14:31:17 +0000 (0:00:01.372) 0:01:37.303 ******
2026-01-10 14:34:14.112252 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:34:14.112263 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:34:14.112274 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:34:14.112285 | orchestrator |
2026-01-10 14:34:14.112296 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-01-10 14:34:14.112312 | orchestrator | Saturday 10 January 2026 14:31:42 +0000 (0:00:25.680) 0:02:02.984 ******
2026-01-10 14:34:14.112323 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:34:14.112334 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:34:14.112345 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:34:14.112356 | orchestrator |
2026-01-10 14:34:14.112367 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-01-10 14:34:14.112378 | orchestrator | Saturday 10 January 2026 14:31:43 +0000 (0:00:00.684) 0:02:03.668 ******
2026-01-10 14:34:14.112389 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:34:14.112401 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:34:14.112413 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:34:14.112424 | orchestrator |
2026-01-10 14:34:14.112436 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-01-10 14:34:14.112447 | orchestrator | Saturday 10 January 2026 14:31:44 +0000 (0:00:00.565) 0:02:04.234 ******
2026-01-10 14:34:14.112458 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:34:14.112469 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:34:14.112480 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:34:14.112491 | orchestrator |
2026-01-10 14:34:14.112502 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-01-10 14:34:14.112514 | orchestrator | Saturday 10 January 2026 14:31:44 +0000 (0:00:00.568) 0:02:04.803 ******
2026-01-10 14:34:14.112525 | orchestrator | ok: [testbed-node-0]
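The node-token task sequence logged here (wait, register mode, change mode, read, store, restore) exists because the k3s join token at /var/lib/rancher/k3s/server/node-token is root-only on the servers, yet the agents need its value to join. A condensed sketch of that flow follows; the mode strings and register names are assumptions, not the role's verbatim tasks:

```yaml
# Condensed sketch of the node-token handling (assumed values):
# temporarily widen permissions, slurp the token for the agent
# play, then restore the recorded original mode.
- name: Register node-token file access mode
  ansible.builtin.stat:
    path: /var/lib/rancher/k3s/server/node-token
  register: p

- name: Change file access node-token
  ansible.builtin.file:
    path: /var/lib/rancher/k3s/server/node-token
    mode: "g+rx,o+rx"

- name: Read node-token from master
  ansible.builtin.slurp:
    src: /var/lib/rancher/k3s/server/node-token
  register: node_token

- name: Restore node-token file access
  ansible.builtin.file:
    path: /var/lib/rancher/k3s/server/node-token
    mode: "{{ p.stat.mode }}"
```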
2026-01-10 14:34:14.112536 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:34:14.112547 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:34:14.112558 | orchestrator |
2026-01-10 14:34:14.112570 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-01-10 14:34:14.112581 | orchestrator | Saturday 10 January 2026 14:31:45 +0000 (0:00:00.842) 0:02:05.645 ******
2026-01-10 14:34:14.112592 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:34:14.112602 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:34:14.112613 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:34:14.112624 | orchestrator |
2026-01-10 14:34:14.112635 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-01-10 14:34:14.112646 | orchestrator | Saturday 10 January 2026 14:31:45 +0000 (0:00:00.298) 0:02:05.943 ******
2026-01-10 14:34:14.112657 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:34:14.112668 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:34:14.112679 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:34:14.112690 | orchestrator |
2026-01-10 14:34:14.112701 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-01-10 14:34:14.112712 | orchestrator | Saturday 10 January 2026 14:31:46 +0000 (0:00:00.621) 0:02:06.564 ******
2026-01-10 14:34:14.112723 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:34:14.112734 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:34:14.112746 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:34:14.112757 | orchestrator |
2026-01-10 14:34:14.112769 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-01-10 14:34:14.112787 | orchestrator | Saturday 10 January 2026 14:31:47 +0000 (0:00:00.934) 0:02:07.196 ******
2026-01-10 14:34:14.112799 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:34:14.112810 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:34:14.112821 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:34:14.112831 | orchestrator |
2026-01-10 14:34:14.112855 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-01-10 14:34:14.112867 | orchestrator | Saturday 10 January 2026 14:31:47 +0000 (0:00:00.934) 0:02:08.131 ******
2026-01-10 14:34:14.112878 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:34:14.112889 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:34:14.112900 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:34:14.112910 | orchestrator |
2026-01-10 14:34:14.112920 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-01-10 14:34:14.112930 | orchestrator | Saturday 10 January 2026 14:31:48 +0000 (0:00:00.830) 0:02:08.962 ******
2026-01-10 14:34:14.112941 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.112951 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:14.112961 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:14.112972 | orchestrator |
2026-01-10 14:34:14.112982 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-01-10 14:34:14.112992 | orchestrator | Saturday 10 January 2026 14:31:49 +0000 (0:00:00.250) 0:02:09.213 ******
2026-01-10 14:34:14.113003 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.113013 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:14.113024 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:14.113034 | orchestrator |
2026-01-10 14:34:14.113046 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-01-10 14:34:14.113057 | orchestrator | Saturday 10 January 2026 14:31:49 +0000 (0:00:00.277) 0:02:09.491 ******
2026-01-10 14:34:14.113067 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:34:14.113078 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:34:14.113089 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:34:14.113099 | orchestrator |
2026-01-10 14:34:14.113109 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-01-10 14:34:14.113120 | orchestrator | Saturday 10 January 2026 14:31:50 +0000 (0:00:00.842) 0:02:10.334 ******
2026-01-10 14:34:14.113131 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:34:14.113142 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:34:14.113153 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:34:14.113163 | orchestrator |
2026-01-10 14:34:14.113174 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-01-10 14:34:14.113185 | orchestrator | Saturday 10 January 2026 14:31:50 +0000 (0:00:00.632) 0:02:10.966 ******
2026-01-10 14:34:14.113196 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-10 14:34:14.113215 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-10 14:34:14.113226 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-10 14:34:14.113237 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-10 14:34:14.113248 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-10 14:34:14.113264 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-10 14:34:14.113273 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-10 14:34:14.113284 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-10 14:34:14.113293 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-10 14:34:14.113303 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-01-10 14:34:14.113328 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-10 14:34:14.113338 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-10 14:34:14.113349 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-01-10 14:34:14.113359 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-10 14:34:14.113369 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-10 14:34:14.113380 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-10 14:34:14.113391 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-10 14:34:14.113401 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-10 14:34:14.113412 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-10 14:34:14.113422 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-10 14:34:14.113433 | orchestrator |
2026-01-10 14:34:14.113444 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-01-10 14:34:14.113455 | orchestrator |
2026-01-10 14:34:14.113466 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-01-10 14:34:14.113478 | orchestrator | Saturday 10 January 2026 14:31:53 +0000 (0:00:02.774) 0:02:13.741 ******
2026-01-10 14:34:14.113488 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:34:14.113499 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:34:14.113510 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:34:14.113520 | orchestrator |
2026-01-10 14:34:14.113530 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-01-10 14:34:14.113540 | orchestrator | Saturday 10 January 2026 14:31:54 +0000 (0:00:00.512) 0:02:14.253 ******
2026-01-10 14:34:14.113551 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:34:14.113561 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:34:14.113571 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:34:14.113581 | orchestrator |
2026-01-10 14:34:14.113591 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-01-10 14:34:14.113602 | orchestrator | Saturday 10 January 2026 14:31:54 +0000 (0:00:00.651) 0:02:14.905 ******
2026-01-10 14:34:14.113612 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:34:14.113622 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:34:14.113633 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:34:14.113643 | orchestrator |
2026-01-10 14:34:14.113653 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-01-10 14:34:14.113663 | orchestrator | Saturday 10 January 2026 14:31:55 +0000 (0:00:00.308) 0:02:15.213 ******
2026-01-10 14:34:14.113673 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:34:14.113683 | orchestrator |
2026-01-10 14:34:14.113693 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-01-10 14:34:14.113703 | orchestrator | Saturday 10 January 2026 14:31:55 +0000 (0:00:00.688) 0:02:15.902 ******
2026-01-10 14:34:14.113714 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:34:14.113724 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:34:14.113734 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:34:14.113744 | orchestrator |
2026-01-10 14:34:14.113755 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-01-10 14:34:14.113765 | orchestrator | Saturday 10 January 2026 14:31:56 +0000 (0:00:00.340) 0:02:16.243 ******
2026-01-10 14:34:14.113775 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:34:14.113785 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:34:14.113795 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:34:14.113805 | orchestrator |
2026-01-10 14:34:14.113822 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-01-10 14:34:14.113832 | orchestrator | Saturday 10 January 2026 14:31:56 +0000 (0:00:00.307) 0:02:16.550 ******
2026-01-10 14:34:14.113859 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:34:14.113871 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:34:14.113881 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:34:14.113891 | orchestrator |
2026-01-10 14:34:14.113967 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-01-10 14:34:14.113984 | orchestrator | Saturday 10 January 2026 14:31:56 +0000 (0:00:00.315) 0:02:16.866 ******
2026-01-10 14:34:14.113996 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:34:14.114007 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:34:14.114049 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:34:14.114061 | orchestrator |
2026-01-10 14:34:14.114086 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-01-10 14:34:14.114099 | orchestrator | Saturday 10 January 2026 14:31:57 +0000 (0:00:00.815) 0:02:17.681 ******
2026-01-10 14:34:14.114111 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:34:14.114122 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:34:14.114133 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:34:14.114146 | orchestrator |
2026-01-10 14:34:14.114158 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-01-10 14:34:14.114175 | orchestrator | Saturday 10 January 2026 14:31:58 +0000 (0:00:01.363) 0:02:19.045 ******
2026-01-10 14:34:14.114188 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:34:14.114200 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:34:14.114211 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:34:14.114223 | orchestrator |
2026-01-10 14:34:14.114234 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-01-10 14:34:14.114244 | orchestrator | Saturday 10 January 2026 14:32:00 +0000 (0:00:01.356) 0:02:20.402 ******
2026-01-10 14:34:14.114254 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:34:14.114266 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:34:14.114277 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:34:14.114289 | orchestrator |
2026-01-10 14:34:14.114302 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-01-10 14:34:14.114313 | orchestrator |
2026-01-10 14:34:14.114325 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-01-10 14:34:14.114336 | orchestrator | Saturday 10 January 2026 14:32:11 +0000 (0:00:11.076) 0:02:31.478 ******
2026-01-10 14:34:14.114346 | orchestrator | ok: [testbed-manager]
2026-01-10 14:34:14.114359 | orchestrator |
2026-01-10 14:34:14.114371 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-10 14:34:14.114382 | orchestrator | Saturday 10 January 2026 14:32:12 +0000 (0:00:00.846) 0:02:32.324 ******
2026-01-10 14:34:14.114395 | orchestrator | changed: [testbed-manager]
2026-01-10 14:34:14.114407 | orchestrator |
2026-01-10 14:34:14.114420 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-10 14:34:14.114431 | orchestrator | Saturday 10 January 2026 14:32:12 +0000 (0:00:00.456) 0:02:32.781 ******
2026-01-10 14:34:14.114442 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-10 14:34:14.114454 | orchestrator |
2026-01-10 14:34:14.114466 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-10 14:34:14.114477 | orchestrator | Saturday 10 January 2026 14:32:13 +0000 (0:00:00.502) 0:02:33.284 ******
2026-01-10 14:34:14.114489 | orchestrator | changed: [testbed-manager]
2026-01-10 14:34:14.114500 | orchestrator |
2026-01-10 14:34:14.114511 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-10 14:34:14.114522 | orchestrator | Saturday 10 January 2026 14:32:13 +0000 (0:00:00.768) 0:02:34.053 ******
2026-01-10 14:34:14.114532 | orchestrator | changed: [testbed-manager]
2026-01-10 14:34:14.114543 | orchestrator |
2026-01-10 14:34:14.114554 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-10 14:34:14.114574 | orchestrator | Saturday 10 January 2026 14:32:14 +0000 (0:00:00.514) 0:02:34.567 ******
2026-01-10 14:34:14.114586 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-10 14:34:14.114597 | orchestrator |
2026-01-10 14:34:14.114608 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-01-10 14:34:14.114619 | orchestrator | Saturday 10 January 2026 14:32:15 +0000 (0:00:01.414) 0:02:35.982 ******
2026-01-10 14:34:14.114629 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-10 14:34:14.114640 | orchestrator |
2026-01-10 14:34:14.114652 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-01-10 14:34:14.114663 | orchestrator | Saturday 10 January 2026 14:32:16 +0000 (0:00:00.902) 0:02:36.885 ******
2026-01-10 14:34:14.114674 | orchestrator | changed: [testbed-manager]
2026-01-10 14:34:14.114685 | orchestrator |
2026-01-10 14:34:14.114697 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-01-10 14:34:14.114708 | orchestrator | Saturday 10 January 2026 14:32:17 +0000 (0:00:00.465) 0:02:37.351 ******
2026-01-10 14:34:14.114718 | orchestrator | changed: [testbed-manager]
2026-01-10 14:34:14.114730 | orchestrator |
2026-01-10 14:34:14.114737 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-01-10 14:34:14.114743 | orchestrator |
2026-01-10 14:34:14.114749 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-01-10 14:34:14.114792 | orchestrator | Saturday 10 January 2026 14:32:17 +0000 (0:00:00.681) 0:02:38.032 ******
2026-01-10 14:34:14.114799 | orchestrator | ok: [testbed-manager]
2026-01-10 14:34:14.114805 | orchestrator |
2026-01-10 14:34:14.114811 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-01-10 14:34:14.114817 | orchestrator | Saturday 10 January 2026 14:32:18 +0000 (0:00:00.260) 0:02:38.194 ******
2026-01-10 14:34:14.114823 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-01-10 14:34:14.114829 | orchestrator |
2026-01-10 14:34:14.114835 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-01-10 14:34:14.114841 | orchestrator | Saturday 10 January 2026 14:32:18 +0000 (0:00:00.903) 0:02:38.455 ******
2026-01-10 14:34:14.114983 | orchestrator | ok: [testbed-manager]
2026-01-10 14:34:14.114993 | orchestrator |
2026-01-10 14:34:14.114999 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-01-10 14:34:14.115005 | orchestrator | Saturday 10 January 2026 14:32:19 +0000 (0:00:00.903) 0:02:39.358 ******
2026-01-10 14:34:14.115012 | orchestrator | ok: [testbed-manager]
2026-01-10 14:34:14.115018 | orchestrator |
2026-01-10 14:34:14.115024 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-01-10 14:34:14.115030 | orchestrator | Saturday 10 January 2026 14:32:20 +0000 (0:00:01.779) 0:02:41.138 ******
2026-01-10 14:34:14.115036 | orchestrator | changed: [testbed-manager]
2026-01-10 14:34:14.115043 | orchestrator |
2026-01-10 14:34:14.115049 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-01-10 14:34:14.115055 | orchestrator | Saturday 10 January 2026 14:32:21 +0000 (0:00:00.960) 0:02:42.099 ******
2026-01-10 14:34:14.115061 | orchestrator | ok: [testbed-manager]
2026-01-10 14:34:14.115067 | orchestrator |
2026-01-10 14:34:14.115084 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-01-10 14:34:14.115090 | orchestrator | Saturday 10 January 2026 14:32:22 +0000 (0:00:00.638) 0:02:42.737 ******
2026-01-10 14:34:14.115097 | orchestrator | changed: [testbed-manager]
2026-01-10 14:34:14.115103 | orchestrator |
2026-01-10 14:34:14.115109 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-01-10 14:34:14.115115 | orchestrator | Saturday 10 January 2026 14:32:31 +0000 (0:00:08.900) 0:02:51.637 ******
2026-01-10 14:34:14.115121 | orchestrator | changed: [testbed-manager]
2026-01-10 14:34:14.115127 | orchestrator |
2026-01-10 14:34:14.115134 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-01-10 14:34:14.115140 | orchestrator | Saturday 10 January 2026 14:32:48 +0000 (0:00:17.353) 0:03:08.991 ******
2026-01-10 14:34:14.115197 | orchestrator | ok: [testbed-manager]
2026-01-10 14:34:14.115219 | orchestrator |
2026-01-10 14:34:14.115225 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-01-10 14:34:14.115238 | orchestrator |
2026-01-10 14:34:14.115244 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-01-10 14:34:14.115251 | orchestrator | Saturday 10 January 2026 14:32:49 +0000 (0:00:00.921) 0:03:09.913 ******
2026-01-10 14:34:14.115257 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:34:14.115263 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:34:14.115269 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:34:14.115275 | orchestrator |
2026-01-10 14:34:14.115281 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-01-10 14:34:14.115287 | orchestrator | Saturday 10 January 2026 14:32:50 +0000 (0:00:00.418) 0:03:10.332 ******
2026-01-10 14:34:14.115294 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.115300 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:14.115306 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:14.115312 | orchestrator |
2026-01-10 14:34:14.115318 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-01-10 14:34:14.115324 | orchestrator | Saturday 10 January 2026 14:32:50 +0000 (0:00:00.516) 0:03:10.849 ******
2026-01-10 14:34:14.115330 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:34:14.115337 | orchestrator |
2026-01-10 14:34:14.115343 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-01-10 14:34:14.115349 | orchestrator | Saturday 10 January 2026 14:32:51 +0000 (0:00:00.936) 0:03:11.671 ******
2026-01-10 14:34:14.115355 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-10 14:34:14.115361 | orchestrator |
2026-01-10 14:34:14.115367 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-01-10 14:34:14.115373 | orchestrator | Saturday 10 January 2026 14:32:52 +0000 (0:00:00.936) 0:03:12.607 ******
2026-01-10 14:34:14.115379 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-10 14:34:14.115386 | orchestrator |
2026-01-10 14:34:14.115392 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-01-10 14:34:14.115398 | orchestrator | Saturday 10 January 2026 14:32:53 +0000 (0:00:01.112) 0:03:13.720 ******
2026-01-10 14:34:14.115404 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.115410 | orchestrator |
2026-01-10 14:34:14.115416 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-01-10 14:34:14.115422 | orchestrator | Saturday 10 January 2026 14:32:53 +0000 (0:00:00.197) 0:03:13.917 ******
2026-01-10 14:34:14.115428 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-10 14:34:14.115433 | orchestrator |
2026-01-10 14:34:14.115438 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-01-10 14:34:14.115443 | orchestrator | Saturday 10 January 2026 14:32:55 +0000 (0:00:01.371) 0:03:15.288 ******
2026-01-10 14:34:14.115449 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.115454 | orchestrator |
2026-01-10 14:34:14.115460 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-01-10 14:34:14.115465 | orchestrator | Saturday 10 January 2026 14:32:55 +0000 (0:00:00.161) 0:03:15.450 ******
2026-01-10 14:34:14.115470 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.115476 | orchestrator |
2026-01-10 14:34:14.115481 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-01-10 14:34:14.115486 | orchestrator | Saturday 10 January 2026 14:32:55 +0000 (0:00:00.134) 0:03:15.585 ******
2026-01-10 14:34:14.115492 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.115498 | orchestrator |
2026-01-10 14:34:14.115507 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-01-10 14:34:14.115515 | orchestrator | Saturday 10 January 2026 14:32:55 +0000 (0:00:00.142) 0:03:15.727 ******
2026-01-10 14:34:14.115529 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.115540 | orchestrator |
2026-01-10 14:34:14.115555 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-01-10 14:34:14.115564 | orchestrator | Saturday 10 January 2026 14:32:55 +0000 (0:00:00.127) 0:03:15.855 ******
2026-01-10 14:34:14.115572 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-10 14:34:14.115580 | orchestrator |
2026-01-10 14:34:14.115589 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-01-10 14:34:14.115597 | orchestrator | Saturday 10 January 2026 14:33:00 +0000 (0:00:04.639) 0:03:20.495 ******
2026-01-10 14:34:14.115605 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-01-10 14:34:14.115615 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-01-10 14:34:14.115627 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-01-10 14:34:14.115636 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-01-10 14:34:14.115647 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-01-10 14:34:14.115654 | orchestrator |
2026-01-10 14:34:14.115659 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-01-10 14:34:14.115664 | orchestrator | Saturday 10 January 2026 14:33:42 +0000 (0:00:42.529) 0:04:03.024 ******
2026-01-10 14:34:14.115676 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-10 14:34:14.115682 | orchestrator |
2026-01-10 14:34:14.115687 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-01-10 14:34:14.115692 | orchestrator | Saturday 10 January 2026 14:33:44 +0000 (0:00:01.154) 0:04:04.178 ******
2026-01-10 14:34:14.115698 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-10 14:34:14.115703 | orchestrator |
2026-01-10 14:34:14.115708 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-01-10 14:34:14.115717 | orchestrator | Saturday 10 January 2026 14:33:45 +0000 (0:00:01.480) 0:04:05.659 ******
2026-01-10 14:34:14.115723 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-10 14:34:14.115728 | orchestrator |
2026-01-10 14:34:14.115733 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-01-10 14:34:14.115739 | orchestrator | Saturday 10 January 2026 14:33:46 +0000 (0:00:01.022) 0:04:06.681 ******
2026-01-10 14:34:14.115744 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.115749 | orchestrator |
2026-01-10 14:34:14.115755 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-01-10 14:34:14.115760 | orchestrator | Saturday 10 January 2026 14:33:46 +0000 (0:00:00.105) 0:04:06.787 ******
2026-01-10 14:34:14.115765 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-01-10 14:34:14.115771 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-01-10 14:34:14.115776 | orchestrator |
2026-01-10 14:34:14.115782 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-01-10 14:34:14.115787 | orchestrator | Saturday 10 January 2026 14:33:48 +0000 (0:00:01.772) 0:04:08.559 ******
2026-01-10 14:34:14.115792 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.115798 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:14.115803 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:14.115809 | orchestrator |
2026-01-10 14:34:14.115814 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-01-10 14:34:14.115819 | orchestrator | Saturday 10 January 2026 14:33:48 +0000 (0:00:00.349) 0:04:08.908 ******
2026-01-10 14:34:14.115824 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:34:14.115830 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:34:14.115835 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:34:14.115840 | orchestrator |
2026-01-10 14:34:14.115888 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-01-10 14:34:14.115896 | orchestrator |
2026-01-10 14:34:14.115904 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-01-10 14:34:14.115917 | orchestrator | Saturday 10 January 2026 14:33:49 +0000 (0:00:01.150) 0:04:10.059 ******
2026-01-10 14:34:14.115925 | orchestrator | ok: [testbed-manager]
2026-01-10 14:34:14.115935 | orchestrator |
2026-01-10 14:34:14.115945 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-01-10 14:34:14.115954 | orchestrator | Saturday 10 January 2026 14:33:50 +0000 (0:00:00.131) 0:04:10.190 ******
2026-01-10 14:34:14.115961 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-01-10 14:34:14.115966 | orchestrator |
2026-01-10 14:34:14.115972 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-01-10 14:34:14.115977 | orchestrator | Saturday 10 January 2026 14:33:50 +0000 (0:00:00.288) 0:04:10.479 ******
2026-01-10 14:34:14.115983 | orchestrator | changed: [testbed-manager]
2026-01-10 14:34:14.115988 | orchestrator |
2026-01-10 14:34:14.115993 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-01-10 14:34:14.115999 | orchestrator |
2026-01-10 14:34:14.116004 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-01-10 14:34:14.116009 | orchestrator | Saturday 10 January 2026 14:33:56 +0000 (0:00:06.160) 0:04:16.639 ******
2026-01-10 14:34:14.116014 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:34:14.116019 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:34:14.116025 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:34:14.116030 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:34:14.116035 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:34:14.116041 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:34:14.116046 | orchestrator |
2026-01-10 14:34:14.116051 | orchestrator | TASK [Manage labels] ***********************************************************
2026-01-10 14:34:14.116057 | orchestrator | Saturday 10 January 2026 14:33:57 +0000 (0:00:00.941) 0:04:17.580 ******
2026-01-10 14:34:14.116062 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-10 14:34:14.116067 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-10 14:34:14.116073 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-10 14:34:14.116078 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-10 14:34:14.116083 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-10 14:34:14.116089 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-10 14:34:14.116094 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-10 14:34:14.116099 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-10 14:34:14.116104 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-10 14:34:14.116110 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-10 14:34:14.116115 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-10 14:34:14.116120 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-10 14:34:14.116131 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-10 14:34:14.116136 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-10 14:34:14.116141 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-10 14:34:14.116147 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-10 14:34:14.116152 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-10 14:34:14.116161 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-10 14:34:14.116166 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-10 14:34:14.116175 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-10 14:34:14.116181 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-10 14:34:14.116186 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-10 14:34:14.116191 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-10 14:34:14.116197 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-10 14:34:14.116202 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-10 14:34:14.116207 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-10 14:34:14.116213 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-10 14:34:14.116218 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-10 14:34:14.116224 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-10 14:34:14.116229 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-10 14:34:14.116234 | orchestrator |
2026-01-10 14:34:14.116239 | orchestrator | TASK [Manage annotations] ******************************************************
2026-01-10 14:34:14.116245 | orchestrator | Saturday 10 January 2026 14:34:10 +0000 (0:00:13.517) 0:04:31.098 ******
2026-01-10 14:34:14.116250 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:34:14.116255 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:34:14.116261 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:34:14.116266 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.116272 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:14.116277 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:14.116282 | orchestrator |
2026-01-10 14:34:14.116288 | orchestrator | TASK [Manage taints] ***********************************************************
2026-01-10 14:34:14.116293 | orchestrator | Saturday 10 January 2026 14:34:11 +0000 (0:00:00.630) 0:04:31.728 ******
2026-01-10 14:34:14.116298 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:34:14.116304 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:34:14.116309 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:34:14.116314 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:34:14.116320 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:34:14.116325 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:34:14.116330 | orchestrator |
2026-01-10 14:34:14.116336 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:34:14.116341 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:34:14.116348 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-01-10 14:34:14.116353 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-10 14:34:14.116359 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-10 14:34:14.116364 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-10 14:34:14.116369 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-10 14:34:14.116375 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-10 14:34:14.116383 | orchestrator |
2026-01-10 14:34:14.116389 | orchestrator |
2026-01-10 14:34:14.116394 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:34:14.116399 | orchestrator | Saturday 10 January 2026 14:34:12 +0000 (0:00:00.515) 0:04:32.244 ******
2026-01-10 14:34:14.116405 | orchestrator | ===============================================================================
2026-01-10 14:34:14.116410 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 44.03s
2026-01-10 14:34:14.116416 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.53s
2026-01-10 14:34:14.116421 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.68s
2026-01-10 14:34:14.116429 | orchestrator | kubectl : Install required packages ------------------------------------ 17.35s
2026-01-10 14:34:14.116435 | orchestrator | Manage labels ---------------------------------------------------------- 13.52s
2026-01-10 14:34:14.116441 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.08s
2026-01-10 14:34:14.116446 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.90s
2026-01-10 14:34:14.116451 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.16s
2026-01-10 14:34:14.116459 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.92s
2026-01-10 14:34:14.116464 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.64s
2026-01-10 14:34:14.116470 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.11s
2026-01-10 14:34:14.116475 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.77s
2026-01-10 14:34:14.116480 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.50s
2026-01-10 14:34:14.116486 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.44s
2026-01-10 14:34:14.116491 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.27s
2026-01-10 14:34:14.116496 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.16s
2026-01-10 14:34:14.116502 | orchestrator | k3s_server : Set node role label selector based on Kubernetes version --- 1.94s
2026-01-10 14:34:14.116507 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.87s
2026-01-10 14:34:14.116512 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 1.81s
2026-01-10 14:34:14.116520 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.78s
2026-01-10 14:34:14.116529 | orchestrator | 2026-01-10 14:34:14 | INFO  | Task 3ef556d4-e384-4f44-8d13-899a5138fe59 is in state STARTED
2026-01-10 14:34:14.116538 | orchestrator | 2026-01-10 14:34:14 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:14.116547 | orchestrator | 2026-01-10 14:34:14 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:14.116556 | orchestrator | 2026-01-10 14:34:14 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:17.168553 | orchestrator | 2026-01-10 14:34:17 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:17.171275 | orchestrator | 2026-01-10 14:34:17 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:17.173887 | orchestrator | 2026-01-10 14:34:17 | INFO  | Task 7f2176df-6abf-4012-befb-2479535bac44 is in state STARTED
2026-01-10 14:34:17.178963 | orchestrator | 2026-01-10 14:34:17 | INFO  | Task 3ef556d4-e384-4f44-8d13-899a5138fe59 is in state STARTED
2026-01-10 14:34:17.183515 | orchestrator | 2026-01-10 14:34:17 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:17.185030 | orchestrator | 2026-01-10 14:34:17 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:17.185085 | orchestrator | 2026-01-10 14:34:17 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:20.236641 | orchestrator | 2026-01-10 14:34:20 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:20.240512 | orchestrator | 2026-01-10 14:34:20 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:20.247726 | orchestrator | 2026-01-10 14:34:20 | INFO  | Task 7f2176df-6abf-4012-befb-2479535bac44 is in state STARTED
2026-01-10 14:34:20.258380 | orchestrator | 2026-01-10 14:34:20 | INFO  | Task 3ef556d4-e384-4f44-8d13-899a5138fe59 is in state STARTED
2026-01-10 14:34:20.262208 | orchestrator | 2026-01-10 14:34:20 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:20.264992 | orchestrator | 2026-01-10 14:34:20 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:20.265079 | orchestrator | 2026-01-10 14:34:20 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:23.312750 | orchestrator | 2026-01-10 14:34:23 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:23.314172 | orchestrator | 2026-01-10 14:34:23 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:23.316601 | orchestrator | 2026-01-10 14:34:23 | INFO  | Task 7f2176df-6abf-4012-befb-2479535bac44 is in state STARTED
2026-01-10 14:34:23.317178 | orchestrator | 2026-01-10 14:34:23 | INFO  | Task 3ef556d4-e384-4f44-8d13-899a5138fe59 is in state SUCCESS
2026-01-10 14:34:23.319355 | orchestrator | 2026-01-10 14:34:23 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:23.320562 | orchestrator | 2026-01-10 14:34:23 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:23.320600 | orchestrator | 2026-01-10 14:34:23 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:26.364683 | orchestrator | 2026-01-10 14:34:26 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:26.366777 | orchestrator | 2026-01-10 14:34:26 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:26.369388 | orchestrator | 2026-01-10 14:34:26 | INFO  | Task 7f2176df-6abf-4012-befb-2479535bac44 is in state STARTED
2026-01-10 14:34:26.372183 | orchestrator | 2026-01-10 14:34:26 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:26.374533 | orchestrator | 2026-01-10 14:34:26 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:26.374967 | orchestrator | 2026-01-10 14:34:26 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:29.415715 | orchestrator | 2026-01-10 14:34:29 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:29.416163 | orchestrator | 2026-01-10 14:34:29 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:29.416820 | orchestrator | 2026-01-10 14:34:29 | INFO  | Task 7f2176df-6abf-4012-befb-2479535bac44 is in state SUCCESS
2026-01-10 14:34:29.417618 | orchestrator | 2026-01-10 14:34:29 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:29.418687 | orchestrator | 2026-01-10 14:34:29 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:29.418830 | orchestrator | 2026-01-10 14:34:29 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:32.479515 | orchestrator | 2026-01-10 14:34:32 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:32.480024 | orchestrator | 2026-01-10 14:34:32 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:32.486146 | orchestrator | 2026-01-10 14:34:32 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:32.487270 | orchestrator | 2026-01-10 14:34:32 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:32.487293 | orchestrator | 2026-01-10 14:34:32 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:35.531999 | orchestrator | 2026-01-10 14:34:35 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:35.533516 | orchestrator | 2026-01-10 14:34:35 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:35.535260 | orchestrator | 2026-01-10 14:34:35 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:35.536915 | orchestrator | 2026-01-10 14:34:35 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:35.537008 | orchestrator | 2026-01-10 14:34:35 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:38.580230 | orchestrator | 2026-01-10 14:34:38 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:38.580600 | orchestrator | 2026-01-10 14:34:38 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:38.582000 | orchestrator | 2026-01-10 14:34:38 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:38.586049 | orchestrator | 2026-01-10 14:34:38 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:38.586102 | orchestrator | 2026-01-10 14:34:38 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:41.623492 | orchestrator | 2026-01-10 14:34:41 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:41.625600 | orchestrator | 2026-01-10 14:34:41 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:41.628102 | orchestrator | 2026-01-10 14:34:41 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:41.630524 | orchestrator | 2026-01-10 14:34:41 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:41.630579 | orchestrator | 2026-01-10 14:34:41 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:44.667883 | orchestrator | 2026-01-10 14:34:44 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:44.668946 | orchestrator | 2026-01-10 14:34:44 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:44.669580 | orchestrator | 2026-01-10 14:34:44 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:44.670350 | orchestrator | 2026-01-10 14:34:44 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:44.670459 | orchestrator | 2026-01-10 14:34:44 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:47.708376 | orchestrator | 2026-01-10 14:34:47 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:47.708602 | orchestrator | 2026-01-10 14:34:47 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:47.709632 | orchestrator | 2026-01-10 14:34:47 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:47.710337 | orchestrator | 2026-01-10 14:34:47 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:47.710410 | orchestrator | 2026-01-10 14:34:47 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:50.741917 | orchestrator | 2026-01-10 14:34:50 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:50.742054 | orchestrator | 2026-01-10 14:34:50 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:50.742501 | orchestrator | 2026-01-10 14:34:50 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:50.742985 | orchestrator | 2026-01-10 14:34:50 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:50.743059 | orchestrator | 2026-01-10 14:34:50 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:53.776551 | orchestrator | 2026-01-10 14:34:53 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:53.777128 | orchestrator | 2026-01-10 14:34:53 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:53.778572 | orchestrator | 2026-01-10 14:34:53 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:53.780552 | orchestrator | 2026-01-10 14:34:53 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:53.780594 | orchestrator | 2026-01-10 14:34:53 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:56.819380 | orchestrator | 2026-01-10 14:34:56 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:56.820631 | orchestrator | 2026-01-10 14:34:56 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:56.821370 | orchestrator | 2026-01-10 14:34:56 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:56.822574 | orchestrator | 2026-01-10 14:34:56 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:56.822620 | orchestrator | 2026-01-10 14:34:56 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:34:59.869120 | orchestrator | 2026-01-10 14:34:59 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:34:59.870578 | orchestrator | 2026-01-10 14:34:59 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:34:59.872369 | orchestrator | 2026-01-10 14:34:59 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:34:59.874663 | orchestrator | 2026-01-10 14:34:59 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:34:59.874704 | orchestrator | 2026-01-10 14:34:59 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:35:02.921200 | orchestrator | 2026-01-10 14:35:02 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:35:02.921335 | orchestrator | 2026-01-10 14:35:02 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:35:02.922277 | orchestrator | 2026-01-10 14:35:02 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:35:02.924148 | orchestrator | 2026-01-10 14:35:02 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:35:02.924189 | orchestrator | 2026-01-10 14:35:02 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:35:05.959850 | orchestrator | 2026-01-10 14:35:05 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:35:05.962520 | orchestrator | 2026-01-10 14:35:05 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:35:05.964601 | orchestrator | 2026-01-10 14:35:05 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:35:05.964670 | orchestrator | 2026-01-10 14:35:05 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:35:05.964676 | orchestrator | 2026-01-10 14:35:05 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:35:08.998738 | orchestrator | 2026-01-10 14:35:08 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:35:09.000769 | orchestrator | 2026-01-10 14:35:09 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:35:09.002824 | orchestrator | 2026-01-10 14:35:09 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:35:09.004080 | orchestrator | 2026-01-10 14:35:09 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:35:09.004234 | orchestrator | 2026-01-10 14:35:09 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:35:12.083769 | orchestrator | 2026-01-10 14:35:12 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:35:12.087233 | orchestrator | 2026-01-10 14:35:12 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:35:12.094166 | orchestrator | 2026-01-10 14:35:12 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:35:12.103306 | orchestrator | 2026-01-10 14:35:12 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:35:12.103380 | orchestrator | 2026-01-10 14:35:12 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:35:15.189551 | orchestrator | 2026-01-10 14:35:15 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:35:15.191195 | orchestrator | 2026-01-10 14:35:15 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED
2026-01-10 14:35:15.193193 | orchestrator | 2026-01-10 14:35:15 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED
2026-01-10 14:35:15.195414 | orchestrator | 2026-01-10 14:35:15 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED
2026-01-10 14:35:15.195468 | orchestrator | 2026-01-10 14:35:15 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:35:18.239755 | orchestrator | 2026-01-10 14:35:18 | INFO  | Task
f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:35:18.240216 | orchestrator | 2026-01-10 14:35:18 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED 2026-01-10 14:35:18.241316 | orchestrator | 2026-01-10 14:35:18 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:35:18.242181 | orchestrator | 2026-01-10 14:35:18 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:35:18.242232 | orchestrator | 2026-01-10 14:35:18 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:21.268442 | orchestrator | 2026-01-10 14:35:21 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:35:21.269345 | orchestrator | 2026-01-10 14:35:21 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED 2026-01-10 14:35:21.269750 | orchestrator | 2026-01-10 14:35:21 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:35:21.270560 | orchestrator | 2026-01-10 14:35:21 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:35:21.270598 | orchestrator | 2026-01-10 14:35:21 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:24.305923 | orchestrator | 2026-01-10 14:35:24 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:35:24.306080 | orchestrator | 2026-01-10 14:35:24 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED 2026-01-10 14:35:24.307144 | orchestrator | 2026-01-10 14:35:24 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:35:24.308805 | orchestrator | 2026-01-10 14:35:24 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:35:24.308891 | orchestrator | 2026-01-10 14:35:24 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:27.347682 | orchestrator | 2026-01-10 14:35:27 | INFO  | Task 
f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:35:27.347767 | orchestrator | 2026-01-10 14:35:27 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED 2026-01-10 14:35:27.347776 | orchestrator | 2026-01-10 14:35:27 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:35:27.347782 | orchestrator | 2026-01-10 14:35:27 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:35:27.347789 | orchestrator | 2026-01-10 14:35:27 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:30.388311 | orchestrator | 2026-01-10 14:35:30 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:35:30.404332 | orchestrator | 2026-01-10 14:35:30 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED 2026-01-10 14:35:30.404409 | orchestrator | 2026-01-10 14:35:30 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:35:30.404418 | orchestrator | 2026-01-10 14:35:30 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:35:30.404426 | orchestrator | 2026-01-10 14:35:30 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:33.429056 | orchestrator | 2026-01-10 14:35:33 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:35:33.431864 | orchestrator | 2026-01-10 14:35:33 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED 2026-01-10 14:35:33.433248 | orchestrator | 2026-01-10 14:35:33 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:35:33.435716 | orchestrator | 2026-01-10 14:35:33 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:35:33.435856 | orchestrator | 2026-01-10 14:35:33 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:36.477440 | orchestrator | 2026-01-10 14:35:36 | INFO  | Task 
f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:35:36.483395 | orchestrator | 2026-01-10 14:35:36 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED 2026-01-10 14:35:36.496332 | orchestrator | 2026-01-10 14:35:36 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:35:36.498588 | orchestrator | 2026-01-10 14:35:36 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:35:36.498645 | orchestrator | 2026-01-10 14:35:36 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:39.550658 | orchestrator | 2026-01-10 14:35:39 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:35:39.555144 | orchestrator | 2026-01-10 14:35:39 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED 2026-01-10 14:35:39.557428 | orchestrator | 2026-01-10 14:35:39 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:35:39.560186 | orchestrator | 2026-01-10 14:35:39 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:35:39.560715 | orchestrator | 2026-01-10 14:35:39 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:42.603876 | orchestrator | 2026-01-10 14:35:42 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:35:42.603978 | orchestrator | 2026-01-10 14:35:42 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED 2026-01-10 14:35:42.603990 | orchestrator | 2026-01-10 14:35:42 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:35:42.603996 | orchestrator | 2026-01-10 14:35:42 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:35:42.604000 | orchestrator | 2026-01-10 14:35:42 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:45.609113 | orchestrator | 2026-01-10 14:35:45 | INFO  | Task 
f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:35:45.609567 | orchestrator | 2026-01-10 14:35:45 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED 2026-01-10 14:35:45.610305 | orchestrator | 2026-01-10 14:35:45 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:35:45.611083 | orchestrator | 2026-01-10 14:35:45 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:35:45.611106 | orchestrator | 2026-01-10 14:35:45 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:48.639521 | orchestrator | 2026-01-10 14:35:48 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:35:48.640749 | orchestrator | 2026-01-10 14:35:48 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED 2026-01-10 14:35:48.642375 | orchestrator | 2026-01-10 14:35:48 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:35:48.643393 | orchestrator | 2026-01-10 14:35:48 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:35:48.643688 | orchestrator | 2026-01-10 14:35:48 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:51.677387 | orchestrator | 2026-01-10 14:35:51 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:35:51.679998 | orchestrator | 2026-01-10 14:35:51 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED 2026-01-10 14:35:51.682300 | orchestrator | 2026-01-10 14:35:51 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:35:51.684055 | orchestrator | 2026-01-10 14:35:51 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:35:51.684106 | orchestrator | 2026-01-10 14:35:51 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:54.734262 | orchestrator | 2026-01-10 14:35:54 | INFO  | Task 
f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:35:54.736595 | orchestrator | 2026-01-10 14:35:54 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED 2026-01-10 14:35:54.738758 | orchestrator | 2026-01-10 14:35:54 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:35:54.740265 | orchestrator | 2026-01-10 14:35:54 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:35:54.741366 | orchestrator | 2026-01-10 14:35:54 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:35:57.774230 | orchestrator | 2026-01-10 14:35:57 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:35:57.775064 | orchestrator | 2026-01-10 14:35:57 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED 2026-01-10 14:35:57.775982 | orchestrator | 2026-01-10 14:35:57 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:35:57.777531 | orchestrator | 2026-01-10 14:35:57 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:35:57.777568 | orchestrator | 2026-01-10 14:35:57 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:36:00.800999 | orchestrator | 2026-01-10 14:36:00 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:00.801123 | orchestrator | 2026-01-10 14:36:00 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED 2026-01-10 14:36:00.801300 | orchestrator | 2026-01-10 14:36:00 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:36:00.802155 | orchestrator | 2026-01-10 14:36:00 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:00.802200 | orchestrator | 2026-01-10 14:36:00 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:36:03.838409 | orchestrator | 2026-01-10 14:36:03 | INFO  | Task 
f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:03.839423 | orchestrator | 2026-01-10 14:36:03 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED 2026-01-10 14:36:03.839580 | orchestrator | 2026-01-10 14:36:03 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:36:03.841153 | orchestrator | 2026-01-10 14:36:03 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:03.841205 | orchestrator | 2026-01-10 14:36:03 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:36:06.903131 | orchestrator | 2026-01-10 14:36:06 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:06.903206 | orchestrator | 2026-01-10 14:36:06 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state STARTED 2026-01-10 14:36:06.903213 | orchestrator | 2026-01-10 14:36:06 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:36:06.903218 | orchestrator | 2026-01-10 14:36:06 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:06.903223 | orchestrator | 2026-01-10 14:36:06 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:36:09.937732 | orchestrator | 2026-01-10 14:36:09 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:09.939956 | orchestrator | 2026-01-10 14:36:09 | INFO  | Task b05dd80a-5179-44ef-9a2e-e676ef037cb7 is in state SUCCESS 2026-01-10 14:36:09.942311 | orchestrator | 2026-01-10 14:36:09.942346 | orchestrator | 2026-01-10 14:36:09.942352 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-01-10 14:36:09.942357 | orchestrator | 2026-01-10 14:36:09.942362 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-10 14:36:09.942367 | orchestrator | Saturday 10 January 2026 14:34:17 +0000 (0:00:00.170) 0:00:00.170 
****** 2026-01-10 14:36:09.942372 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-10 14:36:09.942400 | orchestrator | 2026-01-10 14:36:09.942406 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-10 14:36:09.942411 | orchestrator | Saturday 10 January 2026 14:34:18 +0000 (0:00:00.816) 0:00:00.987 ****** 2026-01-10 14:36:09.942416 | orchestrator | changed: [testbed-manager] 2026-01-10 14:36:09.942421 | orchestrator | 2026-01-10 14:36:09.942426 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-01-10 14:36:09.942436 | orchestrator | Saturday 10 January 2026 14:34:19 +0000 (0:00:01.589) 0:00:02.577 ****** 2026-01-10 14:36:09.942441 | orchestrator | changed: [testbed-manager] 2026-01-10 14:36:09.942445 | orchestrator | 2026-01-10 14:36:09.942450 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:36:09.942469 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:36:09.942476 | orchestrator | 2026-01-10 14:36:09.942480 | orchestrator | 2026-01-10 14:36:09.942485 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:36:09.942489 | orchestrator | Saturday 10 January 2026 14:34:20 +0000 (0:00:00.697) 0:00:03.274 ****** 2026-01-10 14:36:09.942494 | orchestrator | =============================================================================== 2026-01-10 14:36:09.942498 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.59s 2026-01-10 14:36:09.942503 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.82s 2026-01-10 14:36:09.942508 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.70s 2026-01-10 14:36:09.942512 | orchestrator | 
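The play above fetches the kubeconfig from a node and then rewrites its API server address for use from the manager. As an illustrative sketch only (not the playbook's actual task implementation, and `testbed-manager` as the target endpoint is an assumption), the "Change server address" step can be expressed as a single regex substitution over the kubeconfig text:

```python
import re

def set_kubeconfig_server(kubeconfig_text: str, new_server: str) -> str:
    # Replace the value of every "server:" line in a kubeconfig document,
    # preserving the original indentation (captured in group 1).
    return re.sub(r"(?m)^(\s*server:\s*).*$", r"\g<1>" + new_server, kubeconfig_text)

# Hypothetical minimal kubeconfig fragment for demonstration.
sample = (
    "clusters:\n"
    "- cluster:\n"
    "    server: https://192.168.16.10:6443\n"
    "  name: testbed\n"
)
print(set_kubeconfig_server(sample, "https://testbed-manager:6443"))
```

A YAML-aware rewrite (loading the document and setting `clusters[*].cluster.server`) would be more robust than line matching, at the cost of a YAML dependency.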
2026-01-10 14:36:09.942517 | orchestrator | 2026-01-10 14:36:09.942521 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-10 14:36:09.942526 | orchestrator | 2026-01-10 14:36:09.942530 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-10 14:36:09.942535 | orchestrator | Saturday 10 January 2026 14:34:16 +0000 (0:00:00.165) 0:00:00.165 ****** 2026-01-10 14:36:09.942539 | orchestrator | ok: [testbed-manager] 2026-01-10 14:36:09.942545 | orchestrator | 2026-01-10 14:36:09.942549 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-10 14:36:09.942554 | orchestrator | Saturday 10 January 2026 14:34:18 +0000 (0:00:01.609) 0:00:01.774 ****** 2026-01-10 14:36:09.942558 | orchestrator | ok: [testbed-manager] 2026-01-10 14:36:09.942563 | orchestrator | 2026-01-10 14:36:09.942567 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-10 14:36:09.942572 | orchestrator | Saturday 10 January 2026 14:34:18 +0000 (0:00:00.647) 0:00:02.422 ****** 2026-01-10 14:36:09.942576 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-10 14:36:09.942581 | orchestrator | 2026-01-10 14:36:09.942585 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-10 14:36:09.942590 | orchestrator | Saturday 10 January 2026 14:34:19 +0000 (0:00:00.918) 0:00:03.340 ****** 2026-01-10 14:36:09.942595 | orchestrator | changed: [testbed-manager] 2026-01-10 14:36:09.942599 | orchestrator | 2026-01-10 14:36:09.942604 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-10 14:36:09.942608 | orchestrator | Saturday 10 January 2026 14:34:22 +0000 (0:00:02.415) 0:00:05.756 ****** 2026-01-10 14:36:09.942613 | orchestrator | changed: [testbed-manager] 2026-01-10 14:36:09.942617 
| orchestrator | 2026-01-10 14:36:09.942622 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-10 14:36:09.942626 | orchestrator | Saturday 10 January 2026 14:34:22 +0000 (0:00:00.627) 0:00:06.384 ****** 2026-01-10 14:36:09.942631 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-10 14:36:09.942636 | orchestrator | 2026-01-10 14:36:09.942640 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-01-10 14:36:09.942645 | orchestrator | Saturday 10 January 2026 14:34:24 +0000 (0:00:02.110) 0:00:08.494 ****** 2026-01-10 14:36:09.942649 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-10 14:36:09.942654 | orchestrator | 2026-01-10 14:36:09.942658 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-01-10 14:36:09.942663 | orchestrator | Saturday 10 January 2026 14:34:25 +0000 (0:00:00.948) 0:00:09.443 ****** 2026-01-10 14:36:09.942668 | orchestrator | ok: [testbed-manager] 2026-01-10 14:36:09.942672 | orchestrator | 2026-01-10 14:36:09.942677 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-10 14:36:09.942681 | orchestrator | Saturday 10 January 2026 14:34:26 +0000 (0:00:00.510) 0:00:09.954 ****** 2026-01-10 14:36:09.942686 | orchestrator | ok: [testbed-manager] 2026-01-10 14:36:09.942690 | orchestrator | 2026-01-10 14:36:09.942695 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:36:09.942703 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:36:09.942707 | orchestrator | 2026-01-10 14:36:09.942712 | orchestrator | 2026-01-10 14:36:09.942716 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:36:09.942721 | orchestrator | Saturday 10 January 2026 
14:34:26 +0000 (0:00:00.472) 0:00:10.426 ****** 2026-01-10 14:36:09.942725 | orchestrator | =============================================================================== 2026-01-10 14:36:09.942730 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.42s 2026-01-10 14:36:09.942734 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.11s 2026-01-10 14:36:09.942739 | orchestrator | Get home directory of operator user ------------------------------------- 1.61s 2026-01-10 14:36:09.942751 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.95s 2026-01-10 14:36:09.942756 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.92s 2026-01-10 14:36:09.942760 | orchestrator | Create .kube directory -------------------------------------------------- 0.65s 2026-01-10 14:36:09.942764 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.63s 2026-01-10 14:36:09.942769 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.51s 2026-01-10 14:36:09.942773 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.47s 2026-01-10 14:36:09.942778 | orchestrator | 2026-01-10 14:36:09.942782 | orchestrator | 2026-01-10 14:36:09.942787 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-01-10 14:36:09.942791 | orchestrator | 2026-01-10 14:36:09.942798 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-10 14:36:09.942803 | orchestrator | Saturday 10 January 2026 14:32:58 +0000 (0:00:00.153) 0:00:00.153 ****** 2026-01-10 14:36:09.942848 | orchestrator | ok: [localhost] => { 2026-01-10 14:36:09.942854 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. 
This is fine." 2026-01-10 14:36:09.942859 | orchestrator | } 2026-01-10 14:36:09.942864 | orchestrator | 2026-01-10 14:36:09.942869 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-01-10 14:36:09.942873 | orchestrator | Saturday 10 January 2026 14:32:58 +0000 (0:00:00.126) 0:00:00.280 ****** 2026-01-10 14:36:09.942879 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-01-10 14:36:09.942885 | orchestrator | ...ignoring 2026-01-10 14:36:09.942889 | orchestrator | 2026-01-10 14:36:09.942894 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-01-10 14:36:09.942898 | orchestrator | Saturday 10 January 2026 14:33:01 +0000 (0:00:02.914) 0:00:03.194 ****** 2026-01-10 14:36:09.942903 | orchestrator | skipping: [localhost] 2026-01-10 14:36:09.942907 | orchestrator | 2026-01-10 14:36:09.942913 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-01-10 14:36:09.942918 | orchestrator | Saturday 10 January 2026 14:33:01 +0000 (0:00:00.335) 0:00:03.530 ****** 2026-01-10 14:36:09.942924 | orchestrator | ok: [localhost] 2026-01-10 14:36:09.942929 | orchestrator | 2026-01-10 14:36:09.942934 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:36:09.942939 | orchestrator | 2026-01-10 14:36:09.942945 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:36:09.942950 | orchestrator | Saturday 10 January 2026 14:33:01 +0000 (0:00:00.212) 0:00:03.743 ****** 2026-01-10 14:36:09.942955 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:36:09.942960 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:36:09.942965 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:36:09.942970 | orchestrator | 
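The "Check RabbitMQ service" task above probes 192.168.16.9:15672 and treats a timeout as "not yet deployed", which selects between a deploy and an upgrade action. A minimal standalone sketch of that reachability probe (connectivity only; the playbook's check additionally searches the response for the string "RabbitMQ Management", which this sketch does not do) might look like:

```python
import socket

def rabbitmq_management_up(host: str, port: int = 15672, timeout: float = 2.0) -> bool:
    # Treat the RabbitMQ management interface as deployed only if the
    # TCP port accepts a connection within the timeout; any socket error
    # (refused, unreachable, timed out) is read as "not deployed yet".
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 192.0.2.1 is a reserved TEST-NET address, so this probe is expected to fail.
print(rabbitmq_management_up("192.0.2.1", timeout=0.3))
```

Deciding deploy-vs-upgrade from this boolean mirrors the skipped/ok pair of `Set kolla_action_rabbitmq` tasks that follow in the log.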
2026-01-10 14:36:09.942975 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:36:09.942981 | orchestrator | Saturday 10 January 2026 14:33:02 +0000 (0:00:00.500) 0:00:04.243 ****** 2026-01-10 14:36:09.942990 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-01-10 14:36:09.942995 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-01-10 14:36:09.943000 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-01-10 14:36:09.943006 | orchestrator | 2026-01-10 14:36:09.943011 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-01-10 14:36:09.943016 | orchestrator | 2026-01-10 14:36:09.943021 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-10 14:36:09.943027 | orchestrator | Saturday 10 January 2026 14:33:02 +0000 (0:00:00.619) 0:00:04.862 ****** 2026-01-10 14:36:09.943032 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:36:09.943057 | orchestrator | 2026-01-10 14:36:09.943062 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-10 14:36:09.943067 | orchestrator | Saturday 10 January 2026 14:33:03 +0000 (0:00:00.600) 0:00:05.463 ****** 2026-01-10 14:36:09.943072 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:36:09.943078 | orchestrator | 2026-01-10 14:36:09.943083 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-01-10 14:36:09.943088 | orchestrator | Saturday 10 January 2026 14:33:04 +0000 (0:00:00.876) 0:00:06.339 ****** 2026-01-10 14:36:09.943093 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:36:09.943098 | orchestrator | 2026-01-10 14:36:09.943104 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
************************************* 2026-01-10 14:36:09.943109 | orchestrator | Saturday 10 January 2026 14:33:04 +0000 (0:00:00.239) 0:00:06.579 ****** 2026-01-10 14:36:09.943114 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:36:09.943119 | orchestrator | 2026-01-10 14:36:09.943124 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-01-10 14:36:09.943130 | orchestrator | Saturday 10 January 2026 14:33:04 +0000 (0:00:00.264) 0:00:06.844 ****** 2026-01-10 14:36:09.943135 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:36:09.943140 | orchestrator | 2026-01-10 14:36:09.943145 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-01-10 14:36:09.943150 | orchestrator | Saturday 10 January 2026 14:33:05 +0000 (0:00:00.258) 0:00:07.102 ****** 2026-01-10 14:36:09.943156 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:36:09.943161 | orchestrator | 2026-01-10 14:36:09.943166 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-10 14:36:09.943171 | orchestrator | Saturday 10 January 2026 14:33:05 +0000 (0:00:00.733) 0:00:07.836 ****** 2026-01-10 14:36:09.943177 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:36:09.943182 | orchestrator | 2026-01-10 14:36:09.943187 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-10 14:36:09.943197 | orchestrator | Saturday 10 January 2026 14:33:06 +0000 (0:00:00.668) 0:00:08.504 ****** 2026-01-10 14:36:09.943202 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:36:09.943207 | orchestrator | 2026-01-10 14:36:09.943212 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-01-10 14:36:09.943217 | orchestrator | Saturday 10 January 2026 14:33:07 +0000 
(0:00:00.827) 0:00:09.332 ****** 2026-01-10 14:36:09.943223 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:36:09.943228 | orchestrator | 2026-01-10 14:36:09.943234 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-01-10 14:36:09.943239 | orchestrator | Saturday 10 January 2026 14:33:07 +0000 (0:00:00.309) 0:00:09.642 ****** 2026-01-10 14:36:09.943244 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:36:09.943248 | orchestrator | 2026-01-10 14:36:09.943253 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-01-10 14:36:09.943261 | orchestrator | Saturday 10 January 2026 14:33:08 +0000 (0:00:00.451) 0:00:10.093 ****** 2026-01-10 14:36:09.943269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:09.943279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 
'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:09.943286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 
'rabbitmq'}}}}) 2026-01-10 14:36:09.943291 | orchestrator | 2026-01-10 14:36:09.943296 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-01-10 14:36:09.943300 | orchestrator | Saturday 10 January 2026 14:33:09 +0000 (0:00:00.945) 0:00:11.038 ****** 2026-01-10 14:36:09.943311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:09.943320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:09.943325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:09.943330 | orchestrator | 2026-01-10 14:36:09.943334 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-01-10 14:36:09.943339 | orchestrator | Saturday 10 January 2026 14:33:11 +0000 (0:00:02.186) 0:00:13.225 ****** 2026-01-10 14:36:09.943344 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-10 14:36:09.943348 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-10 14:36:09.943353 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-10 14:36:09.943357 | orchestrator |
2026-01-10 14:36:09.943362 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-01-10 14:36:09.943367 | orchestrator | Saturday 10 January 2026 14:33:13 +0000 (0:00:02.325) 0:00:15.550 ******
2026-01-10 14:36:09.943371 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-10 14:36:09.943376 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-10 14:36:09.943380 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-10 14:36:09.943384 | orchestrator |
2026-01-10 14:36:09.943389 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-01-10 14:36:09.943396 | orchestrator | Saturday 10 January 2026 14:33:15 +0000 (0:00:02.115) 0:00:17.666 ******
2026-01-10 14:36:09.943404 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-10 14:36:09.943408 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-10 14:36:09.943413 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-10 14:36:09.943417 | orchestrator |
2026-01-10 14:36:09.943422 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-01-10 14:36:09.943426 | orchestrator | Saturday 10 January 2026 14:33:16 +0000 (0:00:01.310) 0:00:18.976 ******
2026-01-10 14:36:09.943431 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-10 14:36:09.943438 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-10 14:36:09.943442 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-10 14:36:09.943447 | orchestrator |
2026-01-10 14:36:09.943451 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-01-10 14:36:09.943456 | orchestrator | Saturday 10 January 2026 14:33:18 +0000 (0:00:01.567) 0:00:20.544 ******
2026-01-10 14:36:09.943460 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-10 14:36:09.943465 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-10 14:36:09.943469 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-10 14:36:09.943473 | orchestrator |
2026-01-10 14:36:09.943478 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-01-10 14:36:09.943483 | orchestrator | Saturday 10 January 2026 14:33:19 +0000 (0:00:01.373) 0:00:21.917 ******
2026-01-10 14:36:09.943487 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-10 14:36:09.943492 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-10 14:36:09.943496 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-10 14:36:09.943500 | orchestrator |
2026-01-10 14:36:09.943505 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-10 14:36:09.943509 | orchestrator | Saturday 10 January 2026 14:33:21 +0000
(0:00:01.359) 0:00:23.276 ****** 2026-01-10 14:36:09.943514 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:36:09.943518 | orchestrator | 2026-01-10 14:36:09.943523 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-01-10 14:36:09.943527 | orchestrator | Saturday 10 January 2026 14:33:22 +0000 (0:00:01.237) 0:00:24.514 ****** 2026-01-10 14:36:09.943532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:09.943541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:09.943550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:09.943555 | orchestrator | 2026-01-10 14:36:09.943559 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-01-10 14:36:09.943564 | orchestrator | Saturday 10 January 2026 14:33:23 +0000 
(0:00:01.385) 0:00:25.899 ****** 2026-01-10 14:36:09.943584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-10 14:36:09.943589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-10 14:36:09.943597 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:36:09.943602 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:36:09.943610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-10 14:36:09.943616 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:36:09.943620 | orchestrator | 2026-01-10 14:36:09.943629 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-01-10 14:36:09.943634 | orchestrator | Saturday 10 January 2026 14:33:24 +0000 (0:00:00.522) 0:00:26.422 ****** 2026-01-10 14:36:09.943639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-10 14:36:09.943644 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:36:09.943649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-10 14:36:09.943656 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:36:09.943661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-10 14:36:09.943666 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:36:09.943671 | orchestrator | 2026-01-10 14:36:09.943675 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-01-10 14:36:09.943682 | orchestrator | Saturday 10 January 2026 14:33:25 +0000 (0:00:01.035) 0:00:27.457 ****** 2026-01-10 14:36:09.943689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:09.943695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:09.943700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:36:09.943708 | orchestrator | 2026-01-10 14:36:09.943713 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-01-10 14:36:09.943717 | orchestrator | Saturday 10 January 2026 14:33:26 +0000 (0:00:01.306) 0:00:28.764 ****** 2026-01-10 14:36:09.943722 | orchestrator | changed: [testbed-node-0] => { 2026-01-10 14:36:09.943726 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:36:09.943731 | orchestrator | } 2026-01-10 14:36:09.943736 | orchestrator | changed: [testbed-node-1] => { 2026-01-10 14:36:09.943740 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:36:09.943744 | orchestrator | } 2026-01-10 14:36:09.943749 | orchestrator | changed: [testbed-node-2] => { 2026-01-10 14:36:09.943753 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:36:09.943758 | orchestrator | } 2026-01-10 14:36:09.943762 | orchestrator | 2026-01-10 14:36:09.943767 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-10 14:36:09.943771 | orchestrator | Saturday 10 January 2026 14:33:27 +0000 (0:00:00.530) 
0:00:29.295 ****** 2026-01-10 14:36:09.943913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-10 14:36:09.943923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-10 14:36:09.943929 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:36:09.943933 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:36:09.943942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-10 14:36:09.943947 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:36:09.943952 | orchestrator | 2026-01-10 14:36:09.943956 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-01-10 14:36:09.943961 | orchestrator | Saturday 10 January 2026 14:33:28 +0000 (0:00:00.910) 0:00:30.205 ****** 2026-01-10 14:36:09.943965 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:36:09.943970 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:36:09.943974 | orchestrator | changed: 
[testbed-node-2]
2026-01-10 14:36:09.943979 | orchestrator |
2026-01-10 14:36:09.943983 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-01-10 14:36:09.943988 | orchestrator | Saturday 10 January 2026 14:33:29 +0000 (0:00:00.858) 0:00:31.063 ******
2026-01-10 14:36:09.943992 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:36:09.943997 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:36:09.944001 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:36:09.944006 | orchestrator |
2026-01-10 14:36:09.944010 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-01-10 14:36:09.944015 | orchestrator | Saturday 10 January 2026 14:33:38 +0000 (0:00:09.104) 0:00:40.168 ******
2026-01-10 14:36:09.944019 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:36:09.944024 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:36:09.944028 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:36:09.944033 | orchestrator |
2026-01-10 14:36:09.944037 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-10 14:36:09.944041 | orchestrator |
2026-01-10 14:36:09.944046 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-10 14:36:09.944053 | orchestrator | Saturday 10 January 2026 14:33:38 +0000 (0:00:00.529) 0:00:40.698 ******
2026-01-10 14:36:09.944058 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:36:09.944062 | orchestrator |
2026-01-10 14:36:09.944067 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-10 14:36:09.944071 | orchestrator | Saturday 10 January 2026 14:33:39 +0000 (0:00:00.683) 0:00:41.381 ******
2026-01-10 14:36:09.944075 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:36:09.944080 | orchestrator |
2026-01-10 14:36:09.944084 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-10 14:36:09.944089 | orchestrator | Saturday 10 January 2026 14:33:39 +0000 (0:00:00.116) 0:00:41.497 ******
2026-01-10 14:36:09.944093 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:36:09.944098 | orchestrator |
2026-01-10 14:36:09.944102 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-10 14:36:09.944107 | orchestrator | Saturday 10 January 2026 14:33:41 +0000 (0:00:01.745) 0:00:43.243 ******
2026-01-10 14:36:09.944113 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:36:09.944118 | orchestrator |
2026-01-10 14:36:09.944123 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-10 14:36:09.944127 | orchestrator |
2026-01-10 14:36:09.944135 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-10 14:36:09.944139 | orchestrator | Saturday 10 January 2026 14:35:34 +0000 (0:01:53.607) 0:02:36.851 ******
2026-01-10 14:36:09.944144 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:36:09.944148 | orchestrator |
2026-01-10 14:36:09.944153 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-10 14:36:09.944157 | orchestrator | Saturday 10 January 2026 14:35:35 +0000 (0:00:00.637) 0:02:37.488 ******
2026-01-10 14:36:09.944161 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:36:09.944166 | orchestrator |
2026-01-10 14:36:09.944170 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-10 14:36:09.944175 | orchestrator | Saturday 10 January 2026 14:35:35 +0000 (0:00:00.120) 0:02:37.609 ******
2026-01-10 14:36:09.944179 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:36:09.944184 | orchestrator |
2026-01-10 14:36:09.944188 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-10 14:36:09.944193 | orchestrator | Saturday 10 January 2026 14:35:42 +0000 (0:00:06.688) 0:02:44.298 ******
2026-01-10 14:36:09.944197 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:36:09.944202 | orchestrator |
2026-01-10 14:36:09.944206 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-10 14:36:09.944211 | orchestrator |
2026-01-10 14:36:09.944215 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-10 14:36:09.944220 | orchestrator | Saturday 10 January 2026 14:35:49 +0000 (0:00:07.476) 0:02:51.774 ******
2026-01-10 14:36:09.944224 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:36:09.944229 | orchestrator |
2026-01-10 14:36:09.944233 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-10 14:36:09.944238 | orchestrator | Saturday 10 January 2026 14:35:50 +0000 (0:00:00.736) 0:02:52.510 ******
2026-01-10 14:36:09.944242 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:36:09.944247 | orchestrator |
2026-01-10 14:36:09.944251 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-10 14:36:09.944255 | orchestrator | Saturday 10 January 2026 14:35:50 +0000 (0:00:00.116) 0:02:52.627 ******
2026-01-10 14:36:09.944260 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:36:09.944264 | orchestrator |
2026-01-10 14:36:09.944269 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-10 14:36:09.944273 | orchestrator | Saturday 10 January 2026 14:35:52 +0000 (0:00:01.654) 0:02:54.281 ******
2026-01-10 14:36:09.944278 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:36:09.944282 | orchestrator |
2026-01-10 14:36:09.944287 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-01-10
14:36:09.944291 | orchestrator | 2026-01-10 14:36:09.944296 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-01-10 14:36:09.944300 | orchestrator | Saturday 10 January 2026 14:36:03 +0000 (0:00:11.673) 0:03:05.955 ****** 2026-01-10 14:36:09.944304 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:36:09.944321 | orchestrator | 2026-01-10 14:36:09.944326 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-01-10 14:36:09.944331 | orchestrator | Saturday 10 January 2026 14:36:04 +0000 (0:00:00.730) 0:03:06.686 ****** 2026-01-10 14:36:09.944335 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:36:09.944340 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:36:09.944345 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:36:09.944349 | orchestrator | 2026-01-10 14:36:09.944354 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:36:09.944359 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-10 14:36:09.944364 | orchestrator | testbed-node-0 : ok=26  changed=16  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-10 14:36:09.944368 | orchestrator | testbed-node-1 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-10 14:36:09.944376 | orchestrator | testbed-node-2 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-10 14:36:09.944381 | orchestrator | 2026-01-10 14:36:09.944385 | orchestrator | 2026-01-10 14:36:09.944390 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:36:09.944394 | orchestrator | Saturday 10 January 2026 14:36:07 +0000 (0:00:02.742) 0:03:09.429 ****** 2026-01-10 14:36:09.944399 | orchestrator | 
=============================================================================== 2026-01-10 14:36:09.944404 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------ 132.76s 2026-01-10 14:36:09.944411 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.09s 2026-01-10 14:36:09.944415 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 9.10s 2026-01-10 14:36:09.944420 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.91s 2026-01-10 14:36:09.944425 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.74s 2026-01-10 14:36:09.944429 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.33s 2026-01-10 14:36:09.944434 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.19s 2026-01-10 14:36:09.944438 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.12s 2026-01-10 14:36:09.944443 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.06s 2026-01-10 14:36:09.944450 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.57s 2026-01-10 14:36:09.944455 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.39s 2026-01-10 14:36:09.944459 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.37s 2026-01-10 14:36:09.944463 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.36s 2026-01-10 14:36:09.944468 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.31s 2026-01-10 14:36:09.944472 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 1.31s 2026-01-10 14:36:09.944477 | orchestrator | rabbitmq : 
include_tasks ------------------------------------------------ 1.24s 2026-01-10 14:36:09.944481 | orchestrator | service-cert-copy : rabbitmq | Copying over backend internal TLS key ---- 1.04s 2026-01-10 14:36:09.944486 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.95s 2026-01-10 14:36:09.944490 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.91s 2026-01-10 14:36:09.944495 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.88s 2026-01-10 14:36:09.944510 | orchestrator | 2026-01-10 14:36:09 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:36:09.944995 | orchestrator | 2026-01-10 14:36:09 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:09.945179 | orchestrator | 2026-01-10 14:36:09 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:36:12.981417 | orchestrator | 2026-01-10 14:36:12 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:12.984371 | orchestrator | 2026-01-10 14:36:12 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:36:12.986211 | orchestrator | 2026-01-10 14:36:12 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:12.986290 | orchestrator | 2026-01-10 14:36:12 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:36:16.021744 | orchestrator | 2026-01-10 14:36:16 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:16.029244 | orchestrator | 2026-01-10 14:36:16 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:36:16.034926 | orchestrator | 2026-01-10 14:36:16 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:16.035010 | orchestrator | 2026-01-10 14:36:16 | INFO  | Wait 1 second(s) until the next check 2026-01-10 
14:36:19.079588 | orchestrator | 2026-01-10 14:36:19 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:19.081700 | orchestrator | 2026-01-10 14:36:19 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:36:19.083201 | orchestrator | 2026-01-10 14:36:19 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:19.083273 | orchestrator | 2026-01-10 14:36:19 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:36:22.127825 | orchestrator | 2026-01-10 14:36:22 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:22.128168 | orchestrator | 2026-01-10 14:36:22 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:36:22.128959 | orchestrator | 2026-01-10 14:36:22 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:22.129022 | orchestrator | 2026-01-10 14:36:22 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:36:25.178075 | orchestrator | 2026-01-10 14:36:25 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:25.181202 | orchestrator | 2026-01-10 14:36:25 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:36:25.183986 | orchestrator | 2026-01-10 14:36:25 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:25.184068 | orchestrator | 2026-01-10 14:36:25 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:36:28.256397 | orchestrator | 2026-01-10 14:36:28 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:28.258519 | orchestrator | 2026-01-10 14:36:28 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:36:28.259508 | orchestrator | 2026-01-10 14:36:28 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:28.259536 | orchestrator 
| 2026-01-10 14:36:28 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:36:31.304042 | orchestrator | 2026-01-10 14:36:31 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:31.304371 | orchestrator | 2026-01-10 14:36:31 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:36:31.305989 | orchestrator | 2026-01-10 14:36:31 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:31.306130 | orchestrator | 2026-01-10 14:36:31 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:36:34.348611 | orchestrator | 2026-01-10 14:36:34 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:34.350488 | orchestrator | 2026-01-10 14:36:34 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:36:34.353000 | orchestrator | 2026-01-10 14:36:34 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:34.353179 | orchestrator | 2026-01-10 14:36:34 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:36:37.397411 | orchestrator | 2026-01-10 14:36:37 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:37.397924 | orchestrator | 2026-01-10 14:36:37 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:36:37.398946 | orchestrator | 2026-01-10 14:36:37 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:37.398998 | orchestrator | 2026-01-10 14:36:37 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:36:40.434428 | orchestrator | 2026-01-10 14:36:40 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:40.437104 | orchestrator | 2026-01-10 14:36:40 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:36:40.441840 | orchestrator | 2026-01-10 14:36:40 | INFO  | Task 
17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:40.441905 | orchestrator | 2026-01-10 14:36:40 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:36:43.483886 | orchestrator | 2026-01-10 14:36:43 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:43.485545 | orchestrator | 2026-01-10 14:36:43 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:36:43.487939 | orchestrator | 2026-01-10 14:36:43 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:43.488744 | orchestrator | 2026-01-10 14:36:43 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:36:46.530229 | orchestrator | 2026-01-10 14:36:46 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:46.533108 | orchestrator | 2026-01-10 14:36:46 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:36:46.536251 | orchestrator | 2026-01-10 14:36:46 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:46.537301 | orchestrator | 2026-01-10 14:36:46 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:36:49.578340 | orchestrator | 2026-01-10 14:36:49 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:49.578926 | orchestrator | 2026-01-10 14:36:49 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:36:49.580513 | orchestrator | 2026-01-10 14:36:49 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:49.580560 | orchestrator | 2026-01-10 14:36:49 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:36:52.622878 | orchestrator | 2026-01-10 14:36:52 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:52.623035 | orchestrator | 2026-01-10 14:36:52 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state 
STARTED 2026-01-10 14:36:52.623939 | orchestrator | 2026-01-10 14:36:52 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:52.623962 | orchestrator | 2026-01-10 14:36:52 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:36:55.658714 | orchestrator | 2026-01-10 14:36:55 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:55.660284 | orchestrator | 2026-01-10 14:36:55 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:36:55.661953 | orchestrator | 2026-01-10 14:36:55 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:55.662004 | orchestrator | 2026-01-10 14:36:55 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:36:58.701331 | orchestrator | 2026-01-10 14:36:58 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:36:58.703417 | orchestrator | 2026-01-10 14:36:58 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:36:58.705168 | orchestrator | 2026-01-10 14:36:58 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:36:58.705243 | orchestrator | 2026-01-10 14:36:58 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:37:01.753637 | orchestrator | 2026-01-10 14:37:01 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:37:01.754140 | orchestrator | 2026-01-10 14:37:01 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:37:01.758663 | orchestrator | 2026-01-10 14:37:01 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:37:01.758733 | orchestrator | 2026-01-10 14:37:01 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:37:04.793868 | orchestrator | 2026-01-10 14:37:04 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:37:04.794214 | orchestrator | 
2026-01-10 14:37:04 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:37:04.795319 | orchestrator | 2026-01-10 14:37:04 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:37:04.795440 | orchestrator | 2026-01-10 14:37:04 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:37:07.857254 | orchestrator | 2026-01-10 14:37:07 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:37:07.857623 | orchestrator | 2026-01-10 14:37:07 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:37:07.860001 | orchestrator | 2026-01-10 14:37:07 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:37:07.860038 | orchestrator | 2026-01-10 14:37:07 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:37:10.907164 | orchestrator | 2026-01-10 14:37:10 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:37:10.907342 | orchestrator | 2026-01-10 14:37:10 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:37:10.907997 | orchestrator | 2026-01-10 14:37:10 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:37:10.908059 | orchestrator | 2026-01-10 14:37:10 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:37:13.954954 | orchestrator | 2026-01-10 14:37:13 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:37:13.955567 | orchestrator | 2026-01-10 14:37:13 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:37:13.957675 | orchestrator | 2026-01-10 14:37:13 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:37:13.957831 | orchestrator | 2026-01-10 14:37:13 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:37:17.002966 | orchestrator | 2026-01-10 14:37:17 | INFO  | Task 
f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:37:17.006137 | orchestrator | 2026-01-10 14:37:17 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:37:17.009235 | orchestrator | 2026-01-10 14:37:17 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:37:17.009450 | orchestrator | 2026-01-10 14:37:17 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:37:20.048182 | orchestrator | 2026-01-10 14:37:20 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:37:20.048936 | orchestrator | 2026-01-10 14:37:20 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:37:20.050665 | orchestrator | 2026-01-10 14:37:20 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:37:20.050821 | orchestrator | 2026-01-10 14:37:20 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:37:23.086520 | orchestrator | 2026-01-10 14:37:23 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:37:23.087535 | orchestrator | 2026-01-10 14:37:23 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:37:23.087598 | orchestrator | 2026-01-10 14:37:23 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state STARTED 2026-01-10 14:37:23.087605 | orchestrator | 2026-01-10 14:37:23 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:37:26.136410 | orchestrator | 2026-01-10 14:37:26 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:37:26.138459 | orchestrator | 2026-01-10 14:37:26 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:37:26.143252 | orchestrator | 2026-01-10 14:37:26 | INFO  | Task 17014c7a-d1aa-440d-a2ab-612bd1f148fe is in state SUCCESS 2026-01-10 14:37:26.145378 | orchestrator | 2026-01-10 14:37:26.145424 | orchestrator | 2026-01-10 
14:37:26.145429 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:37:26.145433 | orchestrator |
2026-01-10 14:37:26.145437 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:37:26.145440 | orchestrator | Saturday 10 January 2026 14:33:48 +0000 (0:00:00.202) 0:00:00.202 ******
2026-01-10 14:37:26.145443 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:37:26.145447 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:37:26.145450 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:37:26.145453 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:37:26.145457 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:37:26.145460 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:37:26.145463 | orchestrator |
2026-01-10 14:37:26.145468 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:37:26.145473 | orchestrator | Saturday 10 January 2026 14:33:49 +0000 (0:00:00.619) 0:00:00.821 ******
2026-01-10 14:37:26.145478 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-01-10 14:37:26.145483 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-01-10 14:37:26.145488 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-01-10 14:37:26.145492 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-01-10 14:37:26.145498 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-01-10 14:37:26.145505 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-01-10 14:37:26.145511 | orchestrator |
2026-01-10 14:37:26.145516 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-01-10 14:37:26.145522 | orchestrator |
2026-01-10 14:37:26.145528 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-01-10 14:37:26.145533 | orchestrator | Saturday 10 January 2026 14:33:50 +0000 (0:00:00.993) 0:00:01.815 ******
2026-01-10 14:37:26.145540 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:37:26.145545 | orchestrator |
2026-01-10 14:37:26.145551 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-01-10 14:37:26.145557 | orchestrator | Saturday 10 January 2026 14:33:51 +0000 (0:00:01.064) 0:00:02.879 ******
2026-01-10 14:37:26.145564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145596 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145602 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145611 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145615 | orchestrator |
2026-01-10 14:37:26.145629 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-01-10 14:37:26.145634 | orchestrator | Saturday 10 January 2026 14:33:52 +0000 (0:00:01.153) 0:00:04.033 ******
2026-01-10 14:37:26.145639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145671 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145677 | orchestrator |
2026-01-10 14:37:26.145680 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-01-10 14:37:26.145683 | orchestrator | Saturday 10 January 2026 14:33:55 +0000 (0:00:02.570) 0:00:06.603 ******
2026-01-10 14:37:26.145686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145834 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145840 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145845 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145849 | orchestrator |
2026-01-10 14:37:26.145856 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-01-10 14:37:26.145859 | orchestrator | Saturday 10 January 2026 14:33:57 +0000 (0:00:02.254) 0:00:08.858 ******
2026-01-10 14:37:26.145862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145871 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145874 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145879 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145883 | orchestrator |
2026-01-10 14:37:26.145890 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************
2026-01-10 14:37:26.145895 | orchestrator | Saturday 10 January 2026 14:33:59 +0000 (0:00:01.790) 0:00:10.648 ******
2026-01-10 14:37:26.145900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145920 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145926 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145931 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.145936 | orchestrator |
2026-01-10 14:37:26.145941 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] ***
2026-01-10 14:37:26.145947 | orchestrator | Saturday 10 January 2026 14:34:01 +0000 (0:00:02.366) 0:00:13.015 ******
2026-01-10 14:37:26.145952 | orchestrator | changed: [testbed-node-0] => {
2026-01-10 14:37:26.145958 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:37:26.145963 | orchestrator | }
2026-01-10 14:37:26.145969 | orchestrator | changed: [testbed-node-1] => {
2026-01-10 14:37:26.145973 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:37:26.145976 | orchestrator | }
2026-01-10 14:37:26.145979 | orchestrator | changed: [testbed-node-2] => {
2026-01-10 14:37:26.145982 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:37:26.145986 | orchestrator | }
2026-01-10 14:37:26.145990 | orchestrator | changed: [testbed-node-3] => {
2026-01-10 14:37:26.145993 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:37:26.145996 | orchestrator | }
2026-01-10 14:37:26.146000 | orchestrator | changed: [testbed-node-4] => {
2026-01-10 14:37:26.146003 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:37:26.146007 | orchestrator | }
2026-01-10 14:37:26.146010 | orchestrator | changed: [testbed-node-5] => {
2026-01-10 14:37:26.146036 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:37:26.146040 | orchestrator | }
2026-01-10 14:37:26.146043 | orchestrator |
2026-01-10 14:37:26.146047 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-10 14:37:26.146050 | orchestrator | Saturday 10 January 2026 14:34:02 +0000 (0:00:00.670) 0:00:13.686 ******
2026-01-10 14:37:26.146056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.146060 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:37:26.146067 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.146075 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:26.146116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.146289 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:26.146297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.146303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.146309 | orchestrator | skipping: 
[testbed-node-3] 2026-01-10 14:37:26.146315 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:37:26.146321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.146326 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:37:26.146332 | orchestrator | 2026-01-10 14:37:26.146338 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-01-10 14:37:26.146400 | orchestrator | Saturday 10 January 2026 14:34:03 +0000 (0:00:01.366) 0:00:15.052 ****** 2026-01-10 14:37:26.146406 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:37:26.146411 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:37:26.146417 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:37:26.146422 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:37:26.146428 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:37:26.146433 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:37:26.146439 | orchestrator | 2026-01-10 14:37:26.146445 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-01-10 14:37:26.146450 | orchestrator | Saturday 10 January 2026 14:34:06 +0000 (0:00:02.770) 0:00:17.822 ****** 2026-01-10 14:37:26.146456 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-01-10 14:37:26.146462 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-01-10 14:37:26.146468 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': 
'192.168.16.13'}) 2026-01-10 14:37:26.146474 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-01-10 14:37:26.146479 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-01-10 14:37:26.146484 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-01-10 14:37:26.146496 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-10 14:37:26.146502 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-10 14:37:26.146507 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-10 14:37:26.146513 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-10 14:37:26.146522 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-10 14:37:26.146528 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-10 14:37:26.146539 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-10 14:37:26.146545 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-10 14:37:26.146551 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-10 14:37:26.146556 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-10 14:37:26.146561 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-10 14:37:26.146566 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-10 14:37:26.146572 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-10 14:37:26.146577 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-10 14:37:26.146583 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-10 14:37:26.146587 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-10 14:37:26.146593 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-10 14:37:26.146598 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-10 14:37:26.146602 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-10 14:37:26.146607 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-10 14:37:26.146612 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-10 14:37:26.146617 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-10 14:37:26.146623 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-10 14:37:26.146628 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-10 14:37:26.146634 | orchestrator | changed: [testbed-node-2] => 
(item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-10 14:37:26.146639 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-10 14:37:26.146644 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-10 14:37:26.146649 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-10 14:37:26.146655 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-10 14:37:26.146660 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-10 14:37:26.146669 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-10 14:37:26.146674 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-10 14:37:26.146680 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-10 14:37:26.146686 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-10 14:37:26.146692 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-10 14:37:26.146698 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-10 14:37:26.146703 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-01-10 14:37:26.146709 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-01-10 14:37:26.146714 | orchestrator | 
ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-01-10 14:37:26.146721 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-01-10 14:37:26.146726 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-01-10 14:37:26.146734 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-01-10 14:37:26.146740 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-10 14:37:26.146828 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-10 14:37:26.146835 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-10 14:37:26.146841 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-10 14:37:26.146846 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-10 14:37:26.146852 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-10 14:37:26.146857 | orchestrator | 2026-01-10 14:37:26.146903 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-10 14:37:26.147262 | orchestrator | Saturday 10 January 2026 14:34:27 +0000 (0:00:20.747) 0:00:38.570 ****** 2026-01-10 14:37:26.147272 | orchestrator | 2026-01-10 14:37:26.147278 | orchestrator | TASK 
[ovn-controller : Flush handlers] ***************************************** 2026-01-10 14:37:26.147283 | orchestrator | Saturday 10 January 2026 14:34:27 +0000 (0:00:00.070) 0:00:38.641 ****** 2026-01-10 14:37:26.147288 | orchestrator | 2026-01-10 14:37:26.147294 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-10 14:37:26.147299 | orchestrator | Saturday 10 January 2026 14:34:27 +0000 (0:00:00.067) 0:00:38.708 ****** 2026-01-10 14:37:26.147305 | orchestrator | 2026-01-10 14:37:26.147310 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-10 14:37:26.147315 | orchestrator | Saturday 10 January 2026 14:34:27 +0000 (0:00:00.066) 0:00:38.774 ****** 2026-01-10 14:37:26.147321 | orchestrator | 2026-01-10 14:37:26.147327 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-10 14:37:26.147342 | orchestrator | Saturday 10 January 2026 14:34:27 +0000 (0:00:00.066) 0:00:38.840 ****** 2026-01-10 14:37:26.147348 | orchestrator | 2026-01-10 14:37:26.147354 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-10 14:37:26.147359 | orchestrator | Saturday 10 January 2026 14:34:27 +0000 (0:00:00.065) 0:00:38.905 ****** 2026-01-10 14:37:26.147364 | orchestrator | 2026-01-10 14:37:26.147370 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-01-10 14:37:26.147376 | orchestrator | Saturday 10 January 2026 14:34:27 +0000 (0:00:00.075) 0:00:38.981 ****** 2026-01-10 14:37:26.147381 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:26.147387 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:26.147393 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:26.147398 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:37:26.147404 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:37:26.147410 | orchestrator | ok: 
[testbed-node-5] 2026-01-10 14:37:26.147415 | orchestrator | 2026-01-10 14:37:26.147421 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-01-10 14:37:26.147427 | orchestrator | Saturday 10 January 2026 14:34:29 +0000 (0:00:01.865) 0:00:40.847 ****** 2026-01-10 14:37:26.147432 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:37:26.147438 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:37:26.147444 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:37:26.147449 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:37:26.147455 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:37:26.147461 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:37:26.147466 | orchestrator | 2026-01-10 14:37:26.147472 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-01-10 14:37:26.147478 | orchestrator | 2026-01-10 14:37:26.147484 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-10 14:37:26.147487 | orchestrator | Saturday 10 January 2026 14:34:38 +0000 (0:00:08.804) 0:00:49.651 ****** 2026-01-10 14:37:26.147490 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:37:26.147494 | orchestrator | 2026-01-10 14:37:26.147496 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-10 14:37:26.147500 | orchestrator | Saturday 10 January 2026 14:34:38 +0000 (0:00:00.561) 0:00:50.212 ****** 2026-01-10 14:37:26.147503 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:37:26.147506 | orchestrator | 2026-01-10 14:37:26.147509 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-01-10 14:37:26.147512 | orchestrator | Saturday 10 January 2026 
14:34:39 +0000 (0:00:00.812) 0:00:51.025 ****** 2026-01-10 14:37:26.147515 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:26.147518 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:26.147521 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:26.147524 | orchestrator | 2026-01-10 14:37:26.147527 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-01-10 14:37:26.147530 | orchestrator | Saturday 10 January 2026 14:34:40 +0000 (0:00:01.018) 0:00:52.043 ****** 2026-01-10 14:37:26.147533 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:26.147536 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:26.147539 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:26.147542 | orchestrator | 2026-01-10 14:37:26.147546 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-01-10 14:37:26.147555 | orchestrator | Saturday 10 January 2026 14:34:41 +0000 (0:00:00.374) 0:00:52.417 ****** 2026-01-10 14:37:26.147560 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:26.147565 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:26.147570 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:26.147575 | orchestrator | 2026-01-10 14:37:26.147580 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-01-10 14:37:26.147592 | orchestrator | Saturday 10 January 2026 14:34:41 +0000 (0:00:00.536) 0:00:52.953 ****** 2026-01-10 14:37:26.147603 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:26.147606 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:26.147609 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:26.147612 | orchestrator | 2026-01-10 14:37:26.147615 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-01-10 14:37:26.147618 | orchestrator | Saturday 10 January 2026 14:34:41 +0000 (0:00:00.339) 0:00:53.293 ****** 2026-01-10 14:37:26.147621 
| orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:26.147624 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:26.147627 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:26.147630 | orchestrator | 2026-01-10 14:37:26.147633 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-01-10 14:37:26.147636 | orchestrator | Saturday 10 January 2026 14:34:42 +0000 (0:00:00.329) 0:00:53.622 ****** 2026-01-10 14:37:26.147639 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:26.147645 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:26.147649 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:26.147655 | orchestrator | 2026-01-10 14:37:26.147660 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-01-10 14:37:26.147666 | orchestrator | Saturday 10 January 2026 14:34:42 +0000 (0:00:00.363) 0:00:53.986 ****** 2026-01-10 14:37:26.147671 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:26.147676 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:26.147681 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:26.147686 | orchestrator | 2026-01-10 14:37:26.147692 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-01-10 14:37:26.147695 | orchestrator | Saturday 10 January 2026 14:34:43 +0000 (0:00:00.598) 0:00:54.584 ****** 2026-01-10 14:37:26.147698 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:26.147701 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:26.147704 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:26.147707 | orchestrator | 2026-01-10 14:37:26.147710 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-01-10 14:37:26.147713 | orchestrator | Saturday 10 January 2026 14:34:43 +0000 (0:00:00.431) 0:00:55.016 ****** 2026-01-10 14:37:26.147716 | orchestrator | 
skipping: [testbed-node-0] 2026-01-10 14:37:26.147719 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:26.147722 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:26.147725 | orchestrator | 2026-01-10 14:37:26.147728 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-01-10 14:37:26.147731 | orchestrator | Saturday 10 January 2026 14:34:44 +0000 (0:00:00.402) 0:00:55.418 ****** 2026-01-10 14:37:26.147734 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:26.147737 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:26.147740 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:26.147761 | orchestrator | 2026-01-10 14:37:26.147766 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-01-10 14:37:26.147770 | orchestrator | Saturday 10 January 2026 14:34:44 +0000 (0:00:00.340) 0:00:55.758 ****** 2026-01-10 14:37:26.147775 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:26.147781 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:26.147786 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:26.147790 | orchestrator | 2026-01-10 14:37:26.147795 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-01-10 14:37:26.147800 | orchestrator | Saturday 10 January 2026 14:34:45 +0000 (0:00:00.634) 0:00:56.393 ****** 2026-01-10 14:37:26.147803 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:26.147806 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:26.147809 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:26.147814 | orchestrator | 2026-01-10 14:37:26.147819 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-01-10 14:37:26.147823 | orchestrator | Saturday 10 January 2026 14:34:45 +0000 (0:00:00.372) 0:00:56.766 ****** 2026-01-10 14:37:26.147828 | orchestrator | 
skipping: [testbed-node-0] 2026-01-10 14:37:26.147836 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:26.147842 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:26.147847 | orchestrator | 2026-01-10 14:37:26.147853 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-01-10 14:37:26.147858 | orchestrator | Saturday 10 January 2026 14:34:45 +0000 (0:00:00.358) 0:00:57.125 ****** 2026-01-10 14:37:26.147863 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:26.147868 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:26.147871 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:26.147874 | orchestrator | 2026-01-10 14:37:26.147877 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-01-10 14:37:26.147880 | orchestrator | Saturday 10 January 2026 14:34:46 +0000 (0:00:00.311) 0:00:57.436 ****** 2026-01-10 14:37:26.147883 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:26.147886 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:26.147889 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:26.147892 | orchestrator | 2026-01-10 14:37:26.147895 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-01-10 14:37:26.147898 | orchestrator | Saturday 10 January 2026 14:34:46 +0000 (0:00:00.336) 0:00:57.773 ****** 2026-01-10 14:37:26.147901 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:26.147904 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:26.147907 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:26.147910 | orchestrator | 2026-01-10 14:37:26.147913 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-01-10 14:37:26.147916 | orchestrator | Saturday 10 January 2026 14:34:47 +0000 (0:00:00.709) 0:00:58.482 ****** 2026-01-10 14:37:26.147920 | orchestrator | 
skipping: [testbed-node-0] 2026-01-10 14:37:26.147923 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:26.147926 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:26.147929 | orchestrator | 2026-01-10 14:37:26.147932 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-10 14:37:26.147935 | orchestrator | Saturday 10 January 2026 14:34:47 +0000 (0:00:00.543) 0:00:59.025 ****** 2026-01-10 14:37:26.147938 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:37:26.147941 | orchestrator | 2026-01-10 14:37:26.147947 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-01-10 14:37:26.147951 | orchestrator | Saturday 10 January 2026 14:34:48 +0000 (0:00:00.825) 0:00:59.850 ****** 2026-01-10 14:37:26.147954 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:26.147957 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:26.147960 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:26.147963 | orchestrator | 2026-01-10 14:37:26.147966 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-01-10 14:37:26.147970 | orchestrator | Saturday 10 January 2026 14:34:49 +0000 (0:00:00.851) 0:01:00.702 ****** 2026-01-10 14:37:26.147973 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:26.147976 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:26.147979 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:26.147982 | orchestrator | 2026-01-10 14:37:26.147985 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-01-10 14:37:26.147988 | orchestrator | Saturday 10 January 2026 14:34:49 +0000 (0:00:00.531) 0:01:01.234 ****** 2026-01-10 14:37:26.147991 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:26.147994 | orchestrator | skipping: [testbed-node-1] 
2026-01-10 14:37:26.147997 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:26.148000 | orchestrator | 2026-01-10 14:37:26.148003 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-01-10 14:37:26.148006 | orchestrator | Saturday 10 January 2026 14:34:50 +0000 (0:00:00.428) 0:01:01.662 ****** 2026-01-10 14:37:26.148009 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:26.148012 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:26.148016 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:26.148022 | orchestrator | 2026-01-10 14:37:26.148025 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-01-10 14:37:26.148028 | orchestrator | Saturday 10 January 2026 14:34:50 +0000 (0:00:00.449) 0:01:02.112 ****** 2026-01-10 14:37:26.148031 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:26.148034 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:26.148038 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:26.148045 | orchestrator | 2026-01-10 14:37:26.148048 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-01-10 14:37:26.148051 | orchestrator | Saturday 10 January 2026 14:34:51 +0000 (0:00:00.728) 0:01:02.840 ****** 2026-01-10 14:37:26.148054 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:26.148057 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:26.148061 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:26.148066 | orchestrator | 2026-01-10 14:37:26.148069 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-01-10 14:37:26.148072 | orchestrator | Saturday 10 January 2026 14:34:51 +0000 (0:00:00.407) 0:01:03.248 ****** 2026-01-10 14:37:26.148075 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:26.148078 | orchestrator | skipping: 
[testbed-node-1] 2026-01-10 14:37:26.148098 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:26.148102 | orchestrator | 2026-01-10 14:37:26.148106 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-01-10 14:37:26.148111 | orchestrator | Saturday 10 January 2026 14:34:52 +0000 (0:00:00.408) 0:01:03.656 ****** 2026-01-10 14:37:26.148117 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:37:26.148124 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:37:26.148130 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:37:26.148135 | orchestrator | 2026-01-10 14:37:26.148140 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-10 14:37:26.148145 | orchestrator | Saturday 10 January 2026 14:34:52 +0000 (0:00:00.404) 0:01:04.061 ****** 2026-01-10 14:37:26.148152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 
'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.148208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 
'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.148218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.148240 | orchestrator | 2026-01-10 14:37:26.148245 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-10 14:37:26.148250 | orchestrator | Saturday 10 January 2026 14:34:56 +0000 (0:00:03.432) 0:01:07.493 ****** 2026-01-10 14:37:26.148255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-10 14:37:26.148274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': 
['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.148311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.148327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.148332 | orchestrator | 2026-01-10 14:37:26.148337 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-01-10 14:37:26.148342 | orchestrator | Saturday 10 January 2026 14:35:01 +0000 (0:00:05.156) 0:01:12.650 ****** 2026-01-10 14:37:26.148348 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-01-10 14:37:26.148353 | orchestrator | 2026-01-10 14:37:26.148358 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-01-10 14:37:26.148363 | orchestrator | Saturday 10 January 2026 14:35:01 +0000 (0:00:00.592) 0:01:13.243 ****** 2026-01-10 14:37:26.148483 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:37:26.148489 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:37:26.148493 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:37:26.148498 | orchestrator | 2026-01-10 14:37:26.148502 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-01-10 14:37:26.148507 | orchestrator | Saturday 10 January 2026 14:35:02 +0000 (0:00:00.934) 0:01:14.178 ****** 2026-01-10 14:37:26.148512 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:37:26.148521 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:37:26.148546 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:37:26.148551 | orchestrator | 2026-01-10 14:37:26.148556 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-01-10 14:37:26.148560 | orchestrator | Saturday 10 January 2026 14:35:04 +0000 (0:00:02.030) 0:01:16.208 ****** 2026-01-10 
14:37:26.148565 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:37:26.148887 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:37:26.148903 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:37:26.148909 | orchestrator | 2026-01-10 14:37:26.148914 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-01-10 14:37:26.148919 | orchestrator | Saturday 10 January 2026 14:35:06 +0000 (0:00:01.847) 0:01:18.056 ****** 2026-01-10 14:37:26.148937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148978 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.148985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.149000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.149005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 
14:37:26.149010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.149015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.149020 | orchestrator | 2026-01-10 14:37:26.149025 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-10 14:37:26.149031 | orchestrator | Saturday 10 January 2026 14:35:10 +0000 (0:00:03.856) 0:01:21.912 ****** 2026-01-10 14:37:26.149035 | orchestrator | changed: [testbed-node-0] => { 2026-01-10 14:37:26.149040 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:37:26.149044 | orchestrator | } 2026-01-10 14:37:26.149049 | orchestrator | changed: [testbed-node-1] => { 2026-01-10 14:37:26.149054 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:37:26.149059 | orchestrator | } 2026-01-10 14:37:26.149063 | orchestrator | changed: [testbed-node-2] => { 2026-01-10 14:37:26.149069 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:37:26.149074 | orchestrator | } 2026-01-10 14:37:26.149079 | orchestrator | 2026-01-10 14:37:26.149084 | orchestrator | TASK 
[service-check-containers : Include tasks] ******************************** 2026-01-10 14:37:26.149093 | orchestrator | Saturday 10 January 2026 14:35:10 +0000 (0:00:00.373) 0:01:22.286 ****** 2026-01-10 14:37:26.149099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.149104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.149110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.149122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.149128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.149132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.149138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.149143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.149153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:37:26.149158 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:37:26.149164 | orchestrator | 2026-01-10 14:37:26.149169 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 
2026-01-10 14:37:26.149174 | orchestrator | Saturday 10 January 2026 14:35:14 +0000 (0:00:03.456) 0:01:25.742 ****** 2026-01-10 14:37:26.149180 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-01-10 14:37:26.149185 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-01-10 14:37:26.149191 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-01-10 14:37:26.149196 | orchestrator | 2026-01-10 14:37:26.149201 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-10 14:37:26.149205 | orchestrator | Saturday 10 January 2026 14:35:15 +0000 (0:00:01.233) 0:01:26.976 ****** 2026-01-10 14:37:26.149210 | orchestrator | changed: [testbed-node-0] => { 2026-01-10 14:37:26.149215 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:37:26.149223 | orchestrator | } 2026-01-10 14:37:26.149228 | orchestrator | changed: [testbed-node-1] => { 2026-01-10 14:37:26.149233 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:37:26.149238 | orchestrator | } 2026-01-10 14:37:26.149242 | orchestrator | changed: [testbed-node-2] => { 2026-01-10 14:37:26.149247 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:37:26.149255 | orchestrator | } 2026-01-10 14:37:26.149260 | orchestrator | 2026-01-10 14:37:26.149265 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-10 14:37:26.149271 | orchestrator | Saturday 10 January 2026 14:35:16 +0000 (0:00:01.013) 0:01:27.989 ****** 2026-01-10 14:37:26.149276 | orchestrator | 2026-01-10 14:37:26.149281 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-10 14:37:26.149286 | orchestrator | Saturday 10 January 2026 14:35:16 +0000 (0:00:00.084) 0:01:28.073 ****** 2026-01-10 14:37:26.149291 | orchestrator | 2026-01-10 14:37:26.149296 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-10 
14:37:26.149301 | orchestrator | Saturday 10 January 2026 14:35:16 +0000 (0:00:00.090) 0:01:28.164 ******
2026-01-10 14:37:26.149306 | orchestrator |
2026-01-10 14:37:26.149311 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-01-10 14:37:26.149316 | orchestrator | Saturday 10 January 2026 14:35:16 +0000 (0:00:00.074) 0:01:28.239 ******
2026-01-10 14:37:26.149321 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:37:26.149326 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:37:26.149332 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:37:26.149341 | orchestrator |
2026-01-10 14:37:26.149347 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-01-10 14:37:26.149352 | orchestrator | Saturday 10 January 2026 14:35:25 +0000 (0:00:08.520) 0:01:36.759 ******
2026-01-10 14:37:26.149356 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:37:26.149361 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:37:26.149366 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:37:26.149371 | orchestrator |
2026-01-10 14:37:26.149375 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-01-10 14:37:26.149380 | orchestrator | Saturday 10 January 2026 14:35:40 +0000 (0:00:14.966) 0:01:51.725 ******
2026-01-10 14:37:26.149385 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-01-10 14:37:26.149391 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-01-10 14:37:26.149395 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-01-10 14:37:26.149401 | orchestrator |
2026-01-10 14:37:26.149406 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-01-10 14:37:26.149411 | orchestrator | Saturday 10 January 2026 14:35:55 +0000 (0:00:14.922) 0:02:06.647 ******
2026-01-10 14:37:26.149415 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:37:26.149420 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:37:26.149425 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:37:26.149430 | orchestrator |
2026-01-10 14:37:26.149434 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-01-10 14:37:26.149439 | orchestrator | Saturday 10 January 2026 14:36:04 +0000 (0:00:08.877) 0:02:15.525 ******
2026-01-10 14:37:26.149443 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:37:26.149448 | orchestrator |
2026-01-10 14:37:26.149453 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-01-10 14:37:26.149458 | orchestrator | Saturday 10 January 2026 14:36:04 +0000 (0:00:00.271) 0:02:15.797 ******
2026-01-10 14:37:26.149462 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:37:26.149467 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:37:26.149472 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:37:26.149478 | orchestrator |
2026-01-10 14:37:26.149484 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-01-10 14:37:26.149489 | orchestrator | Saturday 10 January 2026 14:36:05 +0000 (0:00:01.014) 0:02:16.811 ******
2026-01-10 14:37:26.149494 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:37:26.149499 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:37:26.149504 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:37:26.149508 | orchestrator |
2026-01-10 14:37:26.149513 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-01-10 14:37:26.149519 | orchestrator | Saturday 10 January 2026 14:36:06 +0000 (0:00:00.736) 0:02:17.548 ******
2026-01-10 14:37:26.149523 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:37:26.149528 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:37:26.149534 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:37:26.149539 | orchestrator |
2026-01-10 14:37:26.149544 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-01-10 14:37:26.149549 | orchestrator | Saturday 10 January 2026 14:36:07 +0000 (0:00:01.221) 0:02:18.770 ******
2026-01-10 14:37:26.149554 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:37:26.149559 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:37:26.149564 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:37:26.149569 | orchestrator |
2026-01-10 14:37:26.149575 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-01-10 14:37:26.149580 | orchestrator | Saturday 10 January 2026 14:36:08 +0000 (0:00:00.654) 0:02:19.424 ******
2026-01-10 14:37:26.149585 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:37:26.149590 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:37:26.149595 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:37:26.149600 | orchestrator |
2026-01-10 14:37:26.149606 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-01-10 14:37:26.149610 | orchestrator | Saturday 10 January 2026 14:36:08 +0000 (0:00:00.823) 0:02:20.248 ******
2026-01-10 14:37:26.149616 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:37:26.149619 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:37:26.149622 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:37:26.149625 | orchestrator |
2026-01-10 14:37:26.149628 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-01-10 14:37:26.149632 | orchestrator | Saturday 10 January 2026 14:36:09 +0000 (0:00:00.787) 0:02:21.036 ******
2026-01-10 14:37:26.149635 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-01-10 14:37:26.149638 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-01-10 14:37:26.149641 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-01-10 14:37:26.149644 | orchestrator |
2026-01-10 14:37:26.149647 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-01-10 14:37:26.149650 | orchestrator | Saturday 10 January 2026 14:36:10 +0000 (0:00:01.124) 0:02:22.160 ******
2026-01-10 14:37:26.149655 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:37:26.149658 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:37:26.149661 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:37:26.149664 | orchestrator |
2026-01-10 14:37:26.149667 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-01-10 14:37:26.150037 | orchestrator | Saturday 10 January 2026 14:36:11 +0000 (0:00:00.344) 0:02:22.505 ******
2026-01-10 14:37:26.150054 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150059 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150062 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150065 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150069 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150077 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150080 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150094 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150100 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150107 | orchestrator |
2026-01-10 14:37:26.150129 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-01-10 14:37:26.150133 | orchestrator | Saturday 10 January 2026 14:36:13 +0000 (0:00:02.778) 0:02:25.284 ******
2026-01-10 14:37:26.150136 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150142 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150145 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150316 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150331 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150363 | orchestrator |
2026-01-10 14:37:26.150367 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] *************************
2026-01-10 14:37:26.150373 | orchestrator | Saturday 10 January 2026 14:36:19 +0000 (0:00:05.505) 0:02:30.789 ******
2026-01-10 14:37:26.150376 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1)
2026-01-10 14:37:26.150380 | orchestrator |
2026-01-10 14:37:26.150383 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] *****
2026-01-10 14:37:26.150386 | orchestrator | Saturday 10 January 2026 14:36:20 +0000 (0:00:00.752) 0:02:31.542 ******
2026-01-10 14:37:26.150389 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:37:26.150392 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:37:26.150395 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:37:26.150398 | orchestrator |
2026-01-10 14:37:26.150401 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] **********
2026-01-10 14:37:26.150404 | orchestrator | Saturday 10 January 2026 14:36:20 +0000 (0:00:00.690) 0:02:32.233 ******
2026-01-10 14:37:26.150407 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:37:26.150410 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:37:26.150413 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:37:26.150416 | orchestrator |
2026-01-10 14:37:26.150419 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] *******************
2026-01-10 14:37:26.150422 | orchestrator | Saturday 10 January 2026 14:36:22 +0000 (0:00:02.357) 0:02:34.135 ******
2026-01-10 14:37:26.150425 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:37:26.150428 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:37:26.150431 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:37:26.150434 | orchestrator |
2026-01-10 14:37:26.150437 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ********************
2026-01-10 14:37:26.150440 | orchestrator | Saturday 10 January 2026 14:36:25 +0000 (0:00:02.357) 0:02:36.493 ******
2026-01-10 14:37:26.150446 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150453 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150456 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150463 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150488 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150501 | orchestrator |
2026-01-10 14:37:26.150505 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-01-10 14:37:26.150508 | orchestrator | Saturday 10 January 2026 14:36:29 +0000 (0:00:04.742) 0:02:41.235 ******
2026-01-10 14:37:26.150511 | orchestrator | ok: [testbed-node-0] => {
2026-01-10 14:37:26.150514 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:37:26.150517 | orchestrator | }
2026-01-10 14:37:26.150521 | orchestrator | changed: [testbed-node-1] => {
2026-01-10 14:37:26.150524 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:37:26.150527 | orchestrator | }
2026-01-10 14:37:26.150530 | orchestrator | changed: [testbed-node-2] => {
2026-01-10 14:37:26.150533 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:37:26.150536 | orchestrator | }
2026-01-10 14:37:26.150539 | orchestrator |
2026-01-10 14:37:26.150542 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-10 14:37:26.150545 | orchestrator | Saturday 10 January 2026 14:36:30 +0000 (0:00:00.341) 0:02:41.577 ******
2026-01-10 14:37:26.150553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150715 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:37:26.150720 | orchestrator |
2026-01-10 14:37:26.150724 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] *****
2026-01-10 14:37:26.150727 | orchestrator | Saturday 10 January 2026 14:36:32 +0000 (0:00:02.122) 0:02:43.699 ******
2026-01-10 14:37:26.150731 | orchestrator | changed: [testbed-node-0] => (item=[1])
2026-01-10 14:37:26.150735 | orchestrator | changed: [testbed-node-1] => (item=[1])
2026-01-10 14:37:26.150738 | orchestrator | changed: [testbed-node-2] => (item=[1])
2026-01-10 14:37:26.150750 | orchestrator |
2026-01-10 14:37:26.150781 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-01-10 14:37:26.150785 | orchestrator | Saturday 10 January 2026 14:36:33 +0000 (0:00:01.126) 0:02:44.826 ******
2026-01-10 14:37:26.150788 | orchestrator | changed: [testbed-node-0] => {
2026-01-10 14:37:26.150792 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:37:26.150795 | orchestrator | }
2026-01-10 14:37:26.150799 | orchestrator | changed: [testbed-node-1] => {
2026-01-10 14:37:26.150802 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:37:26.150806 | orchestrator | }
2026-01-10 14:37:26.150809 | orchestrator | changed: [testbed-node-2] => {
2026-01-10 14:37:26.150813 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:37:26.150816 | orchestrator | }
2026-01-10 14:37:26.150819 | orchestrator |
2026-01-10 14:37:26.150823 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-10 14:37:26.150827 | orchestrator | Saturday 10 January 2026 14:36:34 +0000 (0:00:00.616) 0:02:45.442 ******
2026-01-10 14:37:26.150830 | orchestrator |
2026-01-10 14:37:26.150833 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-10 14:37:26.150837 | orchestrator | Saturday 10 January 2026 14:36:34 +0000 (0:00:00.065) 0:02:45.507 ******
2026-01-10 14:37:26.150840 | orchestrator |
2026-01-10 14:37:26.150844 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-10 14:37:26.150847 | orchestrator | Saturday 10 January 2026 14:36:34 +0000 (0:00:00.066) 0:02:45.574 ******
2026-01-10 14:37:26.150851 | orchestrator |
2026-01-10 14:37:26.150854 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-01-10 14:37:26.150858 | orchestrator | Saturday 10 January 2026 14:36:34 +0000 (0:00:00.066) 0:02:45.640 ******
2026-01-10 14:37:26.150861 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:37:26.150864 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:37:26.150868 | orchestrator |
2026-01-10 14:37:26.150872 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-01-10 14:37:26.150875 | orchestrator | Saturday 10 January 2026 14:36:48 +0000 (0:00:14.582) 0:03:00.223 ******
2026-01-10 14:37:26.150879 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:37:26.150882 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:37:26.150885 | orchestrator |
2026-01-10 14:37:26.150889 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-01-10 14:37:26.150893 | orchestrator | Saturday 10 January 2026 14:37:03 +0000 (0:00:14.285) 0:03:14.509 ******
2026-01-10 14:37:26.150896 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-01-10 14:37:26.150900 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-01-10 14:37:26.150903 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-01-10 14:37:26.150907 | orchestrator |
2026-01-10 14:37:26.150910 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-01-10 14:37:26.150917 | orchestrator | Saturday 10 January 2026 14:37:17 +0000 (0:00:14.704) 0:03:29.213 ******
2026-01-10 14:37:26.150920 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:37:26.150926 | orchestrator |
2026-01-10 14:37:26.150931 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-01-10 14:37:26.150936 | orchestrator | Saturday 10 January 2026 14:37:17 +0000 (0:00:00.150) 0:03:29.364 ******
2026-01-10 14:37:26.150941 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:37:26.150946 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:37:26.150951 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:37:26.150956 | orchestrator |
2026-01-10 14:37:26.150962 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-01-10 14:37:26.150967 | orchestrator | Saturday 10 January 2026 14:37:18 +0000 (0:00:00.737) 0:03:30.102 ******
2026-01-10 14:37:26.150971 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:37:26.150976 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:37:26.150981 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:37:26.150986 | orchestrator |
2026-01-10 14:37:26.150991 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-01-10 14:37:26.150996 | orchestrator | Saturday 10 January 2026 14:37:19 +0000 (0:00:00.764) 0:03:30.866 ******
2026-01-10 14:37:26.151001 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:37:26.151004 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:37:26.151008 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:37:26.151011 | orchestrator |
2026-01-10 14:37:26.151014 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-01-10 14:37:26.151020 | orchestrator | Saturday 10 January 2026 14:37:20 +0000 (0:00:01.003) 0:03:31.869 ******
2026-01-10 14:37:26.151023 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:37:26.151026 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:37:26.151029 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:37:26.151032 | orchestrator |
2026-01-10 14:37:26.151035 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-01-10 14:37:26.151038 | orchestrator | Saturday 10 January 2026 14:37:21 +0000 (0:00:00.597) 0:03:32.467 ******
2026-01-10 14:37:26.151041 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:37:26.151044 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:37:26.151047 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:37:26.151050 | orchestrator |
2026-01-10 14:37:26.151053
| orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-10 14:37:26.151056 | orchestrator | Saturday 10 January 2026 14:37:21 +0000 (0:00:00.778) 0:03:33.246 ****** 2026-01-10 14:37:26.151059 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:37:26.151062 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:37:26.151065 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:37:26.151068 | orchestrator | 2026-01-10 14:37:26.151071 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-01-10 14:37:26.151074 | orchestrator | Saturday 10 January 2026 14:37:22 +0000 (0:00:00.956) 0:03:34.202 ****** 2026-01-10 14:37:26.151077 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-01-10 14:37:26.151080 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-01-10 14:37:26.151083 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-01-10 14:37:26.151086 | orchestrator | 2026-01-10 14:37:26.151089 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:37:26.151093 | orchestrator | testbed-node-0 : ok=65  changed=29  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-10 14:37:26.151097 | orchestrator | testbed-node-1 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-01-10 14:37:26.151100 | orchestrator | testbed-node-2 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-01-10 14:37:26.151103 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-10 14:37:26.151109 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-10 14:37:26.151112 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-10 14:37:26.151115 | orchestrator | 2026-01-10 14:37:26.151118 | orchestrator | 2026-01-10 
14:37:26.151121 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:37:26.151124 | orchestrator | Saturday 10 January 2026 14:37:24 +0000 (0:00:01.306) 0:03:35.509 ****** 2026-01-10 14:37:26.151127 | orchestrator | =============================================================================== 2026-01-10 14:37:26.151130 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 29.63s 2026-01-10 14:37:26.151133 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 29.25s 2026-01-10 14:37:26.151136 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 23.10s 2026-01-10 14:37:26.151139 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.75s 2026-01-10 14:37:26.151142 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.88s 2026-01-10 14:37:26.151145 | orchestrator | ovn-controller : Restart ovn-controller container ----------------------- 8.80s 2026-01-10 14:37:26.151148 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.51s 2026-01-10 14:37:26.151151 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.16s 2026-01-10 14:37:26.151154 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.74s 2026-01-10 14:37:26.151169 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 3.86s 2026-01-10 14:37:26.151173 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.46s 2026-01-10 14:37:26.151176 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.43s 2026-01-10 14:37:26.151179 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.78s 2026-01-10 14:37:26.151182 | 
orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.77s 2026-01-10 14:37:26.151185 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.57s 2026-01-10 14:37:26.151188 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 2.37s 2026-01-10 14:37:26.151191 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 2.36s 2026-01-10 14:37:26.151194 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 2.25s 2026-01-10 14:37:26.151197 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.12s 2026-01-10 14:37:26.151200 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.03s 2026-01-10 14:37:26.151206 | orchestrator | 2026-01-10 14:37:26 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:37:29.200529 | orchestrator | 2026-01-10 14:37:29 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:37:29.203994 | orchestrator | 2026-01-10 14:37:29 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:37:29.204114 | orchestrator | 2026-01-10 14:37:29 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:37:32.252225 | orchestrator | 2026-01-10 14:37:32 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:37:32.253031 | orchestrator | 2026-01-10 14:37:32 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:37:32.253092 | orchestrator | 2026-01-10 14:37:32 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:37:35.301285 | orchestrator | 2026-01-10 14:37:35 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:37:35.304011 | orchestrator | 2026-01-10 14:37:35 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 
2026-01-10 14:37:35.304145 | orchestrator | 2026-01-10 14:37:35 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:39:28.063577 | orchestrator |
2026-01-10 14:39:28 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:39:28.064421 | orchestrator | 2026-01-10 14:39:28 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state STARTED 2026-01-10 14:39:28.064499 | orchestrator | 2026-01-10 14:39:28 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:39:31.115105 | orchestrator | 2026-01-10 14:39:31 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED 2026-01-10 14:39:31.115311 | orchestrator | 2026-01-10 14:39:31 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED 2026-01-10 14:39:31.116988 | orchestrator | 2026-01-10 14:39:31 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED 2026-01-10 14:39:31.126536 | orchestrator | 2026-01-10 14:39:31 | INFO  | Task 35fd8ed6-bee3-4b01-a98c-0d469af062b0 is in state SUCCESS 2026-01-10 14:39:31.128097 | orchestrator | 2026-01-10 14:39:31.128148 | orchestrator | 2026-01-10 14:39:31.128157 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:39:31.128163 | orchestrator | 2026-01-10 14:39:31.128167 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:39:31.128171 | orchestrator | Saturday 10 January 2026 14:32:28 +0000 (0:00:00.917) 0:00:00.917 ****** 2026-01-10 14:39:31.128175 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:31.128180 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:31.128184 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:31.128191 | orchestrator | 2026-01-10 14:39:31.128197 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:39:31.128203 | orchestrator | Saturday 10 January 2026 14:32:29 +0000 (0:00:00.587) 0:00:01.504 ****** 2026-01-10 14:39:31.128210 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-01-10 14:39:31.128216 | 
orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-01-10 14:39:31.128222 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-01-10 14:39:31.128228 | orchestrator | 2026-01-10 14:39:31.128234 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-01-10 14:39:31.128240 | orchestrator | 2026-01-10 14:39:31.128246 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-10 14:39:31.128252 | orchestrator | Saturday 10 January 2026 14:32:29 +0000 (0:00:00.952) 0:00:02.456 ****** 2026-01-10 14:39:31.128258 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:31.128264 | orchestrator | 2026-01-10 14:39:31.128334 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-01-10 14:39:31.128345 | orchestrator | Saturday 10 January 2026 14:32:30 +0000 (0:00:00.827) 0:00:03.284 ****** 2026-01-10 14:39:31.128351 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:31.128365 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:31.128403 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:31.128411 | orchestrator | 2026-01-10 14:39:31.128425 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-01-10 14:39:31.128439 | orchestrator | Saturday 10 January 2026 14:32:31 +0000 (0:00:00.909) 0:00:04.193 ****** 2026-01-10 14:39:31.128446 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:31.128453 | orchestrator | 2026-01-10 14:39:31.128459 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-01-10 14:39:31.128466 | orchestrator | Saturday 10 January 2026 14:32:32 +0000 (0:00:00.798) 0:00:04.992 ****** 2026-01-10 14:39:31.128472 | orchestrator | ok: 
[testbed-node-0] 2026-01-10 14:39:31.128588 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:31.128598 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:31.128604 | orchestrator | 2026-01-10 14:39:31.128657 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-01-10 14:39:31.128664 | orchestrator | Saturday 10 January 2026 14:32:34 +0000 (0:00:01.571) 0:00:06.563 ****** 2026-01-10 14:39:31.128671 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-10 14:39:31.128678 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-10 14:39:31.128684 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-10 14:39:31.128689 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-10 14:39:31.128736 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-10 14:39:31.128743 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-10 14:39:31.128768 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-10 14:39:31.128778 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-10 14:39:31.128784 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-10 14:39:31.128794 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-10 14:39:31.128804 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-10 14:39:31.128813 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 
128}) 2026-01-10 14:39:31.128824 | orchestrator | 2026-01-10 14:39:31.128834 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-10 14:39:31.128843 | orchestrator | Saturday 10 January 2026 14:32:37 +0000 (0:00:03.890) 0:00:10.454 ****** 2026-01-10 14:39:31.128853 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-01-10 14:39:31.128860 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-01-10 14:39:31.128867 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-01-10 14:39:31.128874 | orchestrator | 2026-01-10 14:39:31.128879 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-10 14:39:31.128889 | orchestrator | Saturday 10 January 2026 14:32:39 +0000 (0:00:01.329) 0:00:11.783 ****** 2026-01-10 14:39:31.128896 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-01-10 14:39:31.128903 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-01-10 14:39:31.128909 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-01-10 14:39:31.128915 | orchestrator | 2026-01-10 14:39:31.128922 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-10 14:39:31.128928 | orchestrator | Saturday 10 January 2026 14:32:41 +0000 (0:00:02.668) 0:00:14.452 ****** 2026-01-10 14:39:31.128934 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-01-10 14:39:31.128940 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.128963 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-01-10 14:39:31.128969 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.128975 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-01-10 14:39:31.128981 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.128987 | orchestrator | 2026-01-10 14:39:31.128994 | orchestrator | TASK [loadbalancer : Ensuring config directories 
exist] ************************ 2026-01-10 14:39:31.129000 | orchestrator | Saturday 10 January 2026 14:32:43 +0000 (0:00:01.619) 0:00:16.072 ****** 2026-01-10 14:39:31.129010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-10 14:39:31.129022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-10 14:39:31.129037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-10 14:39:31.129049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:39:31.129056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:39:31.129066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:39:31.129074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:39:31.129081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:39:31.129088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:39:31.129100 | orchestrator | 2026-01-10 14:39:31.129107 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-01-10 14:39:31.129213 | orchestrator | Saturday 10 
January 2026 14:32:46 +0000 (0:00:02.415) 0:00:18.487 ****** 2026-01-10 14:39:31.129223 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.129229 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.129247 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.129255 | orchestrator | 2026-01-10 14:39:31.129281 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-01-10 14:39:31.129288 | orchestrator | Saturday 10 January 2026 14:32:48 +0000 (0:00:02.407) 0:00:20.895 ****** 2026-01-10 14:39:31.129294 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-01-10 14:39:31.129314 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-01-10 14:39:31.129320 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-01-10 14:39:31.129327 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-01-10 14:39:31.129333 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-01-10 14:39:31.129340 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-01-10 14:39:31.129346 | orchestrator | 2026-01-10 14:39:31.129353 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-01-10 14:39:31.129364 | orchestrator | Saturday 10 January 2026 14:32:52 +0000 (0:00:03.910) 0:00:24.806 ****** 2026-01-10 14:39:31.129371 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.129377 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.129383 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.129389 | orchestrator | 2026-01-10 14:39:31.129395 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-01-10 14:39:31.129401 | orchestrator | Saturday 10 January 2026 14:32:55 +0000 (0:00:03.277) 0:00:28.083 ****** 2026-01-10 14:39:31.129407 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:31.129413 | orchestrator | ok: [testbed-node-0] 2026-01-10 
14:39:31.129419 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:31.129426 | orchestrator | 2026-01-10 14:39:31.129432 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-01-10 14:39:31.129437 | orchestrator | Saturday 10 January 2026 14:32:57 +0000 (0:00:01.881) 0:00:29.965 ****** 2026-01-10 14:39:31.129444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:39:31.129460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:39:31.129467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:39:31.129490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c45777043d0bc683c5183281bb1d717623d37d7d', '__omit_place_holder__c45777043d0bc683c5183281bb1d717623d37d7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-10 14:39:31.129497 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.129503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:39:31.129513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:39:31.129519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:39:31.129525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c45777043d0bc683c5183281bb1d717623d37d7d', '__omit_place_holder__c45777043d0bc683c5183281bb1d717623d37d7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-10 14:39:31.129531 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.129543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:39:31.129554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:39:31.129560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:39:31.129566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c45777043d0bc683c5183281bb1d717623d37d7d', '__omit_place_holder__c45777043d0bc683c5183281bb1d717623d37d7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-10 14:39:31.129572 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.129578 | orchestrator | 2026-01-10 14:39:31.129587 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-01-10 14:39:31.129593 | orchestrator | Saturday 10 January 2026 14:32:58 +0000 (0:00:01.245) 0:00:31.210 ****** 2026-01-10 14:39:31.129600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-10 14:39:31.129606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-10 14:39:31.129699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-10 14:39:31.129724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:39:31.129732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:39:31.129739 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c45777043d0bc683c5183281bb1d717623d37d7d', '__omit_place_holder__c45777043d0bc683c5183281bb1d717623d37d7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-10 14:39:31.129811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:39:31.129832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:39:31.129839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': 
{'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c45777043d0bc683c5183281bb1d717623d37d7d', '__omit_place_holder__c45777043d0bc683c5183281bb1d717623d37d7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-10 14:39:31.129867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:39:31.129875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:39:31.129882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 
'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c45777043d0bc683c5183281bb1d717623d37d7d', '__omit_place_holder__c45777043d0bc683c5183281bb1d717623d37d7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-10 14:39:31.129888 | orchestrator | 2026-01-10 14:39:31.129895 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-10 14:39:31.129901 | orchestrator | Saturday 10 January 2026 14:33:03 +0000 (0:00:04.409) 0:00:35.619 ****** 2026-01-10 14:39:31.129952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-10 14:39:31.129958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-10 14:39:31.129964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-10 14:39:31.129985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:39:31.129991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:39:31.129997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:39:31.130003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:39:31.130051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:39:31.130060 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:39:31.130066 | orchestrator | 2026-01-10 14:39:31.130072 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-10 14:39:31.130078 | orchestrator | Saturday 10 January 2026 14:33:06 +0000 (0:00:03.616) 0:00:39.236 ****** 2026-01-10 14:39:31.130089 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-10 14:39:31.130095 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-10 14:39:31.130101 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-10 14:39:31.130107 | orchestrator | 2026-01-10 14:39:31.130112 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-10 14:39:31.130119 | orchestrator | Saturday 10 January 2026 14:33:09 +0000 (0:00:02.512) 0:00:41.748 ****** 2026-01-10 14:39:31.130123 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-10 14:39:31.130128 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-10 14:39:31.130131 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-10 14:39:31.130135 | orchestrator | 
2026-01-10 14:39:31.131115 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-10 14:39:31.131167 | orchestrator | Saturday 10 January 2026 14:33:15 +0000 (0:00:06.045) 0:00:47.794 ****** 2026-01-10 14:39:31.131174 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.131179 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.131183 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.131187 | orchestrator | 2026-01-10 14:39:31.131191 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-10 14:39:31.131195 | orchestrator | Saturday 10 January 2026 14:33:16 +0000 (0:00:00.788) 0:00:48.582 ****** 2026-01-10 14:39:31.131200 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-10 14:39:31.131205 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-10 14:39:31.131210 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-10 14:39:31.131213 | orchestrator | 2026-01-10 14:39:31.131217 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-10 14:39:31.131221 | orchestrator | Saturday 10 January 2026 14:33:18 +0000 (0:00:02.059) 0:00:50.641 ****** 2026-01-10 14:39:31.131226 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-10 14:39:31.131230 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-10 14:39:31.131234 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-10 14:39:31.131238 | 
orchestrator | 2026-01-10 14:39:31.131242 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-10 14:39:31.131248 | orchestrator | Saturday 10 January 2026 14:33:20 +0000 (0:00:02.304) 0:00:52.945 ****** 2026-01-10 14:39:31.131264 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:31.131274 | orchestrator | 2026-01-10 14:39:31.131280 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-10 14:39:31.131286 | orchestrator | Saturday 10 January 2026 14:33:21 +0000 (0:00:00.617) 0:00:53.563 ****** 2026-01-10 14:39:31.131293 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-10 14:39:31.131300 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-10 14:39:31.131306 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-10 14:39:31.131312 | orchestrator | 2026-01-10 14:39:31.131319 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-10 14:39:31.131326 | orchestrator | Saturday 10 January 2026 14:33:23 +0000 (0:00:02.077) 0:00:55.640 ****** 2026-01-10 14:39:31.131349 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-10 14:39:31.131354 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-10 14:39:31.131358 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-10 14:39:31.131361 | orchestrator | 2026-01-10 14:39:31.131374 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-01-10 14:39:31.131382 | orchestrator | Saturday 10 January 2026 14:33:25 +0000 (0:00:02.334) 0:00:57.975 ****** 2026-01-10 14:39:31.131389 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.131396 | orchestrator | skipping: [testbed-node-1] 
2026-01-10 14:39:31.131401 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.131408 | orchestrator | 2026-01-10 14:39:31.131414 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-01-10 14:39:31.131419 | orchestrator | Saturday 10 January 2026 14:33:25 +0000 (0:00:00.427) 0:00:58.403 ****** 2026-01-10 14:39:31.131425 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.131431 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.131437 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.131443 | orchestrator | 2026-01-10 14:39:31.131448 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-10 14:39:31.131453 | orchestrator | Saturday 10 January 2026 14:33:26 +0000 (0:00:00.353) 0:00:58.757 ****** 2026-01-10 14:39:31.131462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-10 14:39:31.131486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-10 14:39:31.131493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-10 14:39:31.131499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:39:31.131519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:39:31.131531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:39:31.131538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:39:31.131545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2026-01-10 14:39:31.131556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:39:31.131563 | orchestrator | 2026-01-10 14:39:31.131570 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-10 14:39:31.131577 | orchestrator | Saturday 10 January 2026 14:33:29 +0000 (0:00:03.461) 0:01:02.218 ****** 2026-01-10 14:39:31.131582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:39:31.131627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:39:31.131639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:39:31.131643 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.131653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:39:31.131658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:39:31.131664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:39:31.131671 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.131683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:39:31.131693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:39:31.131706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:39:31.131712 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.131718 | orchestrator | 2026-01-10 14:39:31.131724 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-10 14:39:31.131730 | orchestrator | Saturday 10 January 2026 14:33:30 +0000 (0:00:00.919) 0:01:03.138 ****** 2026-01-10 14:39:31.131739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:39:31.131745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:39:31.131752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:39:31.131764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:39:31.131770 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.131777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:39:31.131789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:39:31.131796 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.131801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:39:31.131808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:39:31.131812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:39:31.131817 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.131821 | orchestrator | 2026-01-10 14:39:31.131825 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-01-10 14:39:31.131829 | orchestrator | Saturday 10 January 2026 14:33:32 +0000 (0:00:01.708) 0:01:04.846 ****** 2026-01-10 14:39:31.131834 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-10 14:39:31.131839 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-10 14:39:31.131843 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-10 14:39:31.131848 | orchestrator | 2026-01-10 14:39:31.131852 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-10 14:39:31.131856 | orchestrator | Saturday 10 January 2026 14:33:33 +0000 (0:00:01.369) 0:01:06.216 ****** 2026-01-10 14:39:31.131861 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-10 14:39:31.131868 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-10 14:39:31.131872 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-10 14:39:31.131880 | orchestrator | 2026-01-10 14:39:31.131884 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-10 14:39:31.131887 | orchestrator | Saturday 10 January 2026 14:33:35 +0000 (0:00:01.726) 0:01:07.943 ****** 2026-01-10 14:39:31.131891 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-10 14:39:31.131895 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-10 14:39:31.131899 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.131903 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-10 14:39:31.131907 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-10 14:39:31.131911 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-10 14:39:31.131915 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.131918 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-10 14:39:31.131922 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.131926 | orchestrator | 2026-01-10 14:39:31.131929 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-01-10 14:39:31.131933 | orchestrator | Saturday 10 January 2026 14:33:37 +0000 (0:00:01.643) 0:01:09.586 ****** 
2026-01-10 14:39:31.131937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-10 14:39:31.131945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-10 14:39:31.131949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 
2026-01-10 14:39:31.131953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:39:31.131966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:39:31.131971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}}) 2026-01-10 14:39:31.131975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:39:31.131979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:39:31.131985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:39:31.131989 | orchestrator | 2026-01-10 14:39:31.131993 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-01-10 14:39:31.131997 | orchestrator | Saturday 10 January 2026 14:33:39 +0000 (0:00:02.686) 0:01:12.272 ****** 2026-01-10 14:39:31.132002 | orchestrator | changed: [testbed-node-0] => { 
2026-01-10 14:39:31.132005 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:39:31.132009 | orchestrator | } 2026-01-10 14:39:31.132013 | orchestrator | changed: [testbed-node-1] => { 2026-01-10 14:39:31.132017 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:39:31.132021 | orchestrator | } 2026-01-10 14:39:31.132026 | orchestrator | changed: [testbed-node-2] => { 2026-01-10 14:39:31.132031 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:39:31.132037 | orchestrator | } 2026-01-10 14:39:31.132042 | orchestrator | 2026-01-10 14:39:31.132048 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-10 14:39:31.132053 | orchestrator | Saturday 10 January 2026 14:33:40 +0000 (0:00:00.331) 0:01:12.604 ****** 2026-01-10 14:39:31.132063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:39:31.132078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-10 14:39:31.132086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-10 14:39:31.132090 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:39:31.132094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-10 14:39:31.132098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-10 14:39:31.132105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-10 14:39:31.132110 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:39:31.132113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-10 14:39:31.132122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-10 14:39:31.132130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-10 14:39:31.132135 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:39:31.132139 | orchestrator |
2026-01-10 14:39:31.132142 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-01-10 14:39:31.132146 | orchestrator | Saturday 10 January 2026 14:33:41 +0000 (0:00:01.115) 0:01:13.720 ******
2026-01-10 14:39:31.132150 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:39:31.132154 | orchestrator |
2026-01-10 14:39:31.132158 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-01-10 14:39:31.132162 | orchestrator | Saturday 10 January 2026 14:33:41 +0000 (0:00:00.614) 0:01:14.334 ******
2026-01-10 14:39:31.132168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.132174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-10 14:39:31.132184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.132205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-10 14:39:31.132209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.132217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-10 14:39:31.132229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132246 | orchestrator |
2026-01-10 14:39:31.132252 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-01-10 14:39:31.132261 | orchestrator | Saturday 10 January 2026 14:33:46 +0000 (0:00:04.583) 0:01:18.917 ******
2026-01-10 14:39:31.132269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.132275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-10 14:39:31.132287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132300 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:39:31.132311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.132318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-10 14:39:31.132324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132382 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:39:31.132390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.132394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-10 14:39:31.132404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132412 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:39:31.132416 | orchestrator |
2026-01-10 14:39:31.132420 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-01-10 14:39:31.132423 | orchestrator | Saturday 10 January 2026 14:33:47 +0000 (0:00:01.021) 0:01:19.938 ******
2026-01-10 14:39:31.132427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-10 14:39:31.132433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-10 14:39:31.132442 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:39:31.132446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-10 14:39:31.132450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-10 14:39:31.132454 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:39:31.132461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-10 14:39:31.132465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-10 14:39:31.132469 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:39:31.132473 | orchestrator |
2026-01-10 14:39:31.132477 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-01-10 14:39:31.132481 | orchestrator | Saturday 10 January 2026 14:33:48 +0000 (0:00:01.183) 0:01:21.122 ******
2026-01-10 14:39:31.132485 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:39:31.132488 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:39:31.132492 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:39:31.132496 | orchestrator |
2026-01-10 14:39:31.132499 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-01-10 14:39:31.132503 | orchestrator | Saturday 10 January 2026 14:33:50 +0000 (0:00:01.421) 0:01:22.543 ******
2026-01-10 14:39:31.132507 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:39:31.132511 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:39:31.132515 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:39:31.132518 | orchestrator |
2026-01-10 14:39:31.132522 | orchestrator | TASK [include_role : barbican] *************************************************
2026-01-10 14:39:31.132526 | orchestrator | Saturday 10 January 2026 14:33:52 +0000 (0:00:01.932) 0:01:24.476 ******
2026-01-10 14:39:31.132530 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:39:31.132534 | orchestrator |
2026-01-10 14:39:31.132537 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-01-10 14:39:31.132541 | orchestrator | Saturday 10 January 2026 14:33:52 +0000 (0:00:00.746) 0:01:25.222 ******
2026-01-10 14:39:31.132549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.132557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.132594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.132676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132696 | orchestrator |
2026-01-10 14:39:31.132703 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-01-10 14:39:31.132707 | orchestrator | Saturday 10 January 2026 14:33:58 +0000 (0:00:05.522) 0:01:30.745 ******
2026-01-10 14:39:31.132712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.132724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.132743 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:39:31.132749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.132762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.132770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.133843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.133896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.133920 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:39:31.133929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.133936 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:39:31.133942 | orchestrator |
2026-01-10 14:39:31.133948 | orchestrator | TASK [haproxy-config : Configuring
firewall for barbican] ********************** 2026-01-10 14:39:31.133954 | orchestrator | Saturday 10 January 2026 14:33:59 +0000 (0:00:00.860) 0:01:31.605 ****** 2026-01-10 14:39:31.133961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.133970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.133977 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.133983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.133998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.134005 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.134011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.134071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.134077 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.134083 | orchestrator | 2026-01-10 14:39:31.134089 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-01-10 14:39:31.134095 | orchestrator | Saturday 10 January 2026 14:34:01 +0000 (0:00:02.089) 0:01:33.695 ****** 2026-01-10 14:39:31.134101 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.134108 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.134114 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.134120 | orchestrator | 2026-01-10 14:39:31.134126 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-01-10 14:39:31.134142 | orchestrator | Saturday 10 January 2026 14:34:02 +0000 (0:00:01.539) 0:01:35.235 ****** 2026-01-10 14:39:31.134149 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.134169 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.134173 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.134177 | orchestrator | 2026-01-10 14:39:31.134181 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-01-10 14:39:31.134185 | orchestrator | Saturday 10 January 2026 14:34:04 +0000 (0:00:01.970) 0:01:37.205 ****** 2026-01-10 14:39:31.134188 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.134192 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.134196 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.134200 | orchestrator | 2026-01-10 14:39:31.134216 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-01-10 14:39:31.134220 | orchestrator | Saturday 10 January 2026 14:34:04 +0000 (0:00:00.221) 0:01:37.427 ****** 2026-01-10 14:39:31.134223 | orchestrator | included: ceph-rgw 
for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:31.134228 | orchestrator | 2026-01-10 14:39:31.134241 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-01-10 14:39:31.134247 | orchestrator | Saturday 10 January 2026 14:34:05 +0000 (0:00:00.872) 0:01:38.299 ****** 2026-01-10 14:39:31.134254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-10 14:39:31.134263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-10 14:39:31.134274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-10 14:39:31.134281 | orchestrator | 2026-01-10 14:39:31.134287 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-01-10 14:39:31.134300 | orchestrator | Saturday 10 January 2026 14:34:11 +0000 (0:00:05.741) 0:01:44.041 ****** 2026-01-10 14:39:31.134307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-10 14:39:31.134313 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.134325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-10 14:39:31.134332 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.134338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-10 
14:39:31.134341 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.134345 | orchestrator | 2026-01-10 14:39:31.134349 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-01-10 14:39:31.134353 | orchestrator | Saturday 10 January 2026 14:34:13 +0000 (0:00:01.708) 0:01:45.749 ****** 2026-01-10 14:39:31.134358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-10 14:39:31.134367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-10 14:39:31.134376 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.134380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-10 14:39:31.134384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-10 14:39:31.134388 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.134392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-10 14:39:31.134400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-10 14:39:31.134405 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.134409 | orchestrator | 2026-01-10 14:39:31.134413 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-01-10 14:39:31.134418 | orchestrator | Saturday 10 January 2026 14:34:15 +0000 (0:00:02.462) 0:01:48.212 ****** 2026-01-10 14:39:31.134422 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.134426 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.134430 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.134434 | orchestrator | 2026-01-10 14:39:31.134438 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-01-10 14:39:31.134443 | 
orchestrator | Saturday 10 January 2026 14:34:16 +0000 (0:00:00.568) 0:01:48.780 ****** 2026-01-10 14:39:31.134447 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.134451 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.134456 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.134460 | orchestrator | 2026-01-10 14:39:31.134464 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-01-10 14:39:31.134468 | orchestrator | Saturday 10 January 2026 14:34:17 +0000 (0:00:01.448) 0:01:50.228 ****** 2026-01-10 14:39:31.134473 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:31.134477 | orchestrator | 2026-01-10 14:39:31.134482 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-10 14:39:31.134487 | orchestrator | Saturday 10 January 2026 14:34:18 +0000 (0:00:01.170) 0:01:51.399 ****** 2026-01-10 14:39:31.134494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.134511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.134520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.134534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.134539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.134544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.134554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.134561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.134570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.134576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.134586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.134600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.134606 | orchestrator | 2026-01-10 14:39:31.134658 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-10 14:39:31.134665 | orchestrator | Saturday 10 January 2026 14:34:23 +0000 (0:00:04.243) 0:01:55.642 ****** 2026-01-10 14:39:31.134676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.134682 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.134705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.134711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.134723 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.134730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.134743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.134750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.134762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.134769 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.134776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': 
'30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.134787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.134797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.134804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.134811 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.134817 | orchestrator | 2026-01-10 14:39:31.134824 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-01-10 14:39:31.134830 | orchestrator | Saturday 10 January 2026 14:34:24 +0000 (0:00:00.884) 0:01:56.526 ****** 2026-01-10 14:39:31.134837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.134848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.134854 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.134861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.134868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.134874 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.134881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.134892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.134898 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.134905 | orchestrator | 2026-01-10 14:39:31.134912 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-01-10 14:39:31.134918 | orchestrator | Saturday 10 January 2026 14:34:25 +0000 (0:00:01.007) 0:01:57.534 ****** 2026-01-10 14:39:31.134924 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.134931 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.134938 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.134944 | orchestrator | 2026-01-10 14:39:31.134951 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-01-10 14:39:31.134957 | orchestrator | Saturday 10 January 2026 14:34:26 +0000 (0:00:01.556) 0:01:59.091 ****** 2026-01-10 14:39:31.134964 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.134970 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.134976 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.134983 | orchestrator | 2026-01-10 14:39:31.134989 | 
orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-01-10 14:39:31.134995 | orchestrator | Saturday 10 January 2026 14:34:28 +0000 (0:00:02.156) 0:02:01.247 ****** 2026-01-10 14:39:31.135001 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.135007 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.135014 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.135021 | orchestrator | 2026-01-10 14:39:31.135028 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-01-10 14:39:31.135034 | orchestrator | Saturday 10 January 2026 14:34:29 +0000 (0:00:00.326) 0:02:01.574 ****** 2026-01-10 14:39:31.135040 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.135047 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.135053 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.135060 | orchestrator | 2026-01-10 14:39:31.135067 | orchestrator | TASK [include_role : designate] ************************************************ 2026-01-10 14:39:31.135077 | orchestrator | Saturday 10 January 2026 14:34:29 +0000 (0:00:00.378) 0:02:01.952 ****** 2026-01-10 14:39:31.135084 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:31.135090 | orchestrator | 2026-01-10 14:39:31.135096 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-01-10 14:39:31.135102 | orchestrator | Saturday 10 January 2026 14:34:30 +0000 (0:00:01.241) 0:02:03.194 ****** 2026-01-10 14:39:31.135110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.135121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:39:31.135133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 
14:39:31.135140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135164 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.135179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:39:31.135191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.135243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:39:31.135250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135294 | orchestrator | 2026-01-10 14:39:31.135301 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-01-10 14:39:31.135307 | orchestrator | Saturday 10 January 2026 14:34:34 +0000 (0:00:03.841) 0:02:07.035 ****** 2026-01-10 14:39:31.135313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.135320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:39:31.135329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.135363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:39:31.135370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135400 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.135406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135436 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.135446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.135461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:39:31.135468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.135504 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.135510 | orchestrator | 2026-01-10 14:39:31.135520 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-01-10 14:39:31.135531 | orchestrator | Saturday 
10 January 2026 14:34:35 +0000 (0:00:00.882) 0:02:07.918 ****** 2026-01-10 14:39:31.135539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.135548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.135555 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.135561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.135567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.135573 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.135579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.135585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.135591 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.135597 | orchestrator | 2026-01-10 
14:39:31.137802 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-01-10 14:39:31.137854 | orchestrator | Saturday 10 January 2026 14:34:37 +0000 (0:00:01.759) 0:02:09.677 ****** 2026-01-10 14:39:31.137859 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.137864 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.137868 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.137872 | orchestrator | 2026-01-10 14:39:31.137876 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-01-10 14:39:31.137880 | orchestrator | Saturday 10 January 2026 14:34:38 +0000 (0:00:01.391) 0:02:11.068 ****** 2026-01-10 14:39:31.137885 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.137889 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.137892 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.137896 | orchestrator | 2026-01-10 14:39:31.137900 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-01-10 14:39:31.137904 | orchestrator | Saturday 10 January 2026 14:34:40 +0000 (0:00:02.092) 0:02:13.161 ****** 2026-01-10 14:39:31.137908 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.137911 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.137915 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.137919 | orchestrator | 2026-01-10 14:39:31.137922 | orchestrator | TASK [include_role : glance] *************************************************** 2026-01-10 14:39:31.137926 | orchestrator | Saturday 10 January 2026 14:34:40 +0000 (0:00:00.303) 0:02:13.464 ****** 2026-01-10 14:39:31.137930 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:31.137934 | orchestrator | 2026-01-10 14:39:31.137937 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] 
********************* 2026-01-10 14:39:31.137941 | orchestrator | Saturday 10 January 2026 14:34:42 +0000 (0:00:01.105) 0:02:14.570 ****** 2026-01-10 14:39:31.137951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:39:31.137976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-10 14:39:31.137982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:39:31.137993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-10 14:39:31.138008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:39:31.138058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-10 14:39:31.138064 | orchestrator | 2026-01-10 14:39:31.138072 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-01-10 14:39:31.138076 | orchestrator | Saturday 10 January 2026 14:34:46 +0000 (0:00:04.492) 0:02:19.062 ****** 2026-01-10 14:39:31.138080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:39:31.138089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:39:31.138097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': 
{'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-10 14:39:31.138104 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.138110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-10 14:39:31.138115 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.138122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:39:31.138131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-10 14:39:31.138136 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.138139 | orchestrator | 2026-01-10 14:39:31.138143 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-01-10 14:39:31.138147 | orchestrator | Saturday 10 January 2026 14:34:50 +0000 (0:00:03.559) 0:02:22.622 ****** 2026-01-10 14:39:31.138151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-10 14:39:31.138160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-10 14:39:31.138165 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.138169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-10 14:39:31.138176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-10 14:39:31.138181 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.138185 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-10 14:39:31.138192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-10 14:39:31.138196 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.138200 | orchestrator | 2026-01-10 14:39:31.138204 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-01-10 14:39:31.138209 | orchestrator | Saturday 10 January 2026 14:34:54 +0000 (0:00:04.162) 0:02:26.785 ****** 2026-01-10 14:39:31.138213 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.138217 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.138222 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.138226 | orchestrator | 2026-01-10 14:39:31.138231 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-01-10 14:39:31.138235 | orchestrator | Saturday 10 January 2026 14:34:55 +0000 (0:00:01.400) 0:02:28.185 ****** 2026-01-10 14:39:31.138239 | 
orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.138243 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.138248 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.138252 | orchestrator | 2026-01-10 14:39:31.138256 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-01-10 14:39:31.138260 | orchestrator | Saturday 10 January 2026 14:34:57 +0000 (0:00:02.239) 0:02:30.425 ****** 2026-01-10 14:39:31.138265 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.138269 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.138273 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.138277 | orchestrator | 2026-01-10 14:39:31.138282 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-01-10 14:39:31.138286 | orchestrator | Saturday 10 January 2026 14:34:58 +0000 (0:00:00.316) 0:02:30.741 ****** 2026-01-10 14:39:31.138290 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:31.138294 | orchestrator | 2026-01-10 14:39:31.138299 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-01-10 14:39:31.138303 | orchestrator | Saturday 10 January 2026 14:34:59 +0000 (0:00:00.845) 0:02:31.587 ****** 2026-01-10 14:39:31.138311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.138320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.138325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.138329 | orchestrator | 2026-01-10 14:39:31.138333 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-01-10 14:39:31.138338 | orchestrator | Saturday 10 January 2026 14:35:02 +0000 
(0:00:03.595) 0:02:35.182 ****** 2026-01-10 14:39:31.138344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.138349 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.138353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.138360 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.138370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.138377 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.138384 | orchestrator | 2026-01-10 14:39:31.138389 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-01-10 14:39:31.138397 | orchestrator | Saturday 10 January 2026 14:35:03 +0000 (0:00:00.461) 0:02:35.644 ****** 2026-01-10 14:39:31.138406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.138450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.138457 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.138463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.138468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.138474 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.138481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.138487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.138492 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.138498 | orchestrator | 2026-01-10 14:39:31.138504 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-01-10 14:39:31.138508 | orchestrator | Saturday 10 January 2026 14:35:04 +0000 (0:00:01.027) 0:02:36.672 ****** 2026-01-10 14:39:31.138514 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.138519 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.138526 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.138531 | orchestrator | 2026-01-10 14:39:31.138540 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-01-10 14:39:31.138546 | orchestrator | Saturday 10 January 2026 14:35:05 +0000 (0:00:01.796) 0:02:38.469 ****** 2026-01-10 14:39:31.138552 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.138558 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.138564 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.138569 | orchestrator | 2026-01-10 14:39:31.138575 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-01-10 14:39:31.138582 | 
orchestrator | Saturday 10 January 2026 14:35:08 +0000 (0:00:02.510) 0:02:40.979 ****** 2026-01-10 14:39:31.138592 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.138598 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.138605 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.138635 | orchestrator | 2026-01-10 14:39:31.138642 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-01-10 14:39:31.138648 | orchestrator | Saturday 10 January 2026 14:35:08 +0000 (0:00:00.291) 0:02:41.271 ****** 2026-01-10 14:39:31.138653 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:31.138660 | orchestrator | 2026-01-10 14:39:31.138666 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-01-10 14:39:31.138672 | orchestrator | Saturday 10 January 2026 14:35:09 +0000 (0:00:00.796) 0:02:42.067 ****** 2026-01-10 14:39:31.138686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': 
'30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:39:31.138698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:39:31.138727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:39:31.138734 | orchestrator | 2026-01-10 14:39:31.138740 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-10 14:39:31.138746 | orchestrator | Saturday 
10 January 2026 14:35:14 +0000 (0:00:04.653) 0:02:46.720 ****** 2026-01-10 14:39:31.138760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 
'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:39:31.138772 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.138778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:39:31.138789 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.138799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option 
httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:39:31.138811 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.138815 | orchestrator | 2026-01-10 14:39:31.138819 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-01-10 14:39:31.138823 | orchestrator | Saturday 10 January 2026 14:35:15 +0000 (0:00:00.885) 0:02:47.605 ****** 2026-01-10 14:39:31.138835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-10 14:39:31.138841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}})
2026-01-10 14:39:31.138847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-10 14:39:31.138852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-10 14:39:31.138859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-01-10 14:39:31.138864 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:39:31.138870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-10 14:39:31.138874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-10 14:39:31.138877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-10 14:39:31.138881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-10 14:39:31.138885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-10 14:39:31.138892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-10 14:39:31.138896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-10 14:39:31.138900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-01-10 14:39:31.138904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-10 14:39:31.138908 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:39:31.138911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-01-10 14:39:31.138915 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:39:31.138919 | orchestrator |
2026-01-10 14:39:31.138922 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-01-10 14:39:31.138926 | orchestrator | Saturday 10 January 2026 14:35:16 +0000 (0:00:01.290) 0:02:48.896 ******
2026-01-10 14:39:31.138932 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:39:31.138936 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:39:31.138940 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:39:31.138943 | orchestrator |
2026-01-10 14:39:31.138947 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-01-10 14:39:31.138951 | orchestrator | Saturday 10 January 2026 14:35:18 +0000 (0:00:01.610) 0:02:50.507 ******
2026-01-10 14:39:31.138954 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:39:31.138958 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:39:31.138962 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:39:31.138972 | orchestrator |
2026-01-10 14:39:31.138976 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-01-10 14:39:31.138980 | orchestrator | Saturday 10 January 2026 14:35:20 +0000 (0:00:02.815) 0:02:53.322 ******
2026-01-10 14:39:31.138984 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:39:31.138993 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:39:31.138997 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:39:31.139001 | orchestrator |
2026-01-10 14:39:31.139012 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-01-10 14:39:31.139016 | orchestrator | Saturday 10 January 2026 14:35:21 +0000 (0:00:00.358) 0:02:53.681 ******
2026-01-10 14:39:31.139019 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:39:31.139023 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:39:31.139027 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:39:31.139030 | orchestrator |
2026-01-10 14:39:31.139036 | orchestrator | TASK [include_role : keystone] *************************************************
2026-01-10 14:39:31.139040 | orchestrator | Saturday 10 January 2026 14:35:21 +0000 (0:00:00.415) 0:02:54.097 ******
2026-01-10 14:39:31.139043 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:39:31.139047 | orchestrator |
2026-01-10 14:39:31.139051 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-01-10 14:39:31.139054 | orchestrator | Saturday 10 January 2026 14:35:22 +0000 (0:00:01.192) 0:02:55.289 ******
2026-01-10 14:39:31.139059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-10 14:39:31.139068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-10 14:39:31.139077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-10 14:39:31.139090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-10 14:39:31.139096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-10 14:39:31.139100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-10 14:39:31.139107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-10 14:39:31.139112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-10 14:39:31.139118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-10 14:39:31.139122 | orchestrator |
2026-01-10 14:39:31.139126 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2026-01-10 14:39:31.139138 | orchestrator | Saturday 10 January 2026 14:35:25 +0000 (0:00:03.119) 0:02:58.410 ******
2026-01-10 14:39:31.139144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-10 14:39:31.139148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-10 14:39:31.139152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-10 14:39:31.139156 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:39:31.139279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-10 14:39:31.139297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-10 14:39:31.139303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-10 14:39:31.139319 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:39:31.139330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-10 14:39:31.139337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-10 14:39:31.139349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-10 14:39:31.139361 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:39:31.139365 | orchestrator |
2026-01-10 14:39:31.139369 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-01-10 14:39:31.139373 | orchestrator | Saturday 10 January 2026 14:35:26 +0000 (0:00:01.052) 0:02:59.463 ******
2026-01-10 14:39:31.139377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-01-10 14:39:31.139381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-01-10 14:39:31.139387 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:39:31.139391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-01-10 14:39:31.139395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-01-10 14:39:31.139398 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:39:31.139402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-01-10 14:39:31.139406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-01-10 14:39:31.139412 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:39:31.139415 | orchestrator |
2026-01-10 14:39:31.139419 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-01-10 14:39:31.139423 | orchestrator | Saturday 10 January 2026 14:35:28 +0000 (0:00:01.429) 0:03:00.892 ******
2026-01-10 14:39:31.139427 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:39:31.139431 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:39:31.139434 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:39:31.139438 | orchestrator |
2026-01-10 14:39:31.139442 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-01-10 14:39:31.139445 | orchestrator | Saturday 10 January 2026 14:35:29 +0000 (0:00:01.393) 0:03:02.285 ******
2026-01-10 14:39:31.139449 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:39:31.139453 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:39:31.139456 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:39:31.139460 | orchestrator |
2026-01-10 14:39:31.139464 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-01-10 14:39:31.139468 | orchestrator | Saturday 10 January 2026 14:35:32 +0000 (0:00:02.571) 0:03:04.856 ******
2026-01-10 14:39:31.139471 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:39:31.139478 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:39:31.139481 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:39:31.139485 | orchestrator |
2026-01-10 14:39:31.139489 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-01-10 14:39:31.139492 | orchestrator | Saturday 10 January 2026 14:35:32 +0000 (0:00:00.330) 0:03:05.187 ******
2026-01-10 14:39:31.139496 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:39:31.139500 | orchestrator |
2026-01-10 14:39:31.139504 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-01-10 14:39:31.139507 | orchestrator | Saturday 10 January 2026 14:35:33 +0000 (0:00:01.142) 0:03:06.330 ******
2026-01-10 14:39:31.139514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.139518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.139523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.139530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.139536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.139543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.139547 | orchestrator |
2026-01-10 14:39:31.139551 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-01-10 14:39:31.139555 | orchestrator | Saturday 10 January 2026 14:35:37 +0000 (0:00:03.804) 0:03:10.134 ******
2026-01-10 14:39:31.139559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.139565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.139572 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:39:31.139576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.139584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.139588 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:39:31.139592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.139596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.139599 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:39:31.139603 | orchestrator |
2026-01-10 14:39:31.139607 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-01-10 14:39:31.139632 | orchestrator | Saturday 10 January 2026 14:35:38 +0000 (0:00:00.630) 0:03:10.765 ******
2026-01-10 14:39:31.139642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-10 14:39:31.139647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-10 14:39:31.139651 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:39:31.139654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-10 14:39:31.139658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-10 14:39:31.139662 | orchestrator | skipping: [testbed-node-1]
2026-01-10
14:39:31.139666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.139670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.139674 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.139677 | orchestrator | 2026-01-10 14:39:31.139681 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-01-10 14:39:31.139685 | orchestrator | Saturday 10 January 2026 14:35:39 +0000 (0:00:00.934) 0:03:11.699 ****** 2026-01-10 14:39:31.139692 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.139695 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.139699 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.139714 | orchestrator | 2026-01-10 14:39:31.139720 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-01-10 14:39:31.139727 | orchestrator | Saturday 10 January 2026 14:35:40 +0000 (0:00:01.694) 0:03:13.394 ****** 2026-01-10 14:39:31.139733 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.139739 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.139754 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.139758 | orchestrator | 2026-01-10 14:39:31.139761 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-10 14:39:31.139765 | orchestrator | Saturday 10 January 2026 14:35:43 +0000 (0:00:02.489) 0:03:15.883 ****** 2026-01-10 14:39:31.139769 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 
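The items iterated by the `haproxy-config` tasks above are entries of kolla-ansible's per-project services dict, where each service's `haproxy` sub-dict (e.g. `magnum_api` / `magnum_api_external`) describes a load-balancer frontend. As a rough illustration only — this is not kolla-ansible's actual Jinja template, and the stanza layout here is an assumption — such a sub-dict could be rendered into HAProxy `listen` blocks like this:

```python
def render_haproxy_stanzas(project_services):
    """Illustrative sketch: turn the 'haproxy' sub-dicts seen in the log
    into HAProxy listen blocks. NOT kolla-ansible's real template, just
    an approximation of the mapping from dict keys to config lines."""
    stanzas = []
    for svc in project_services.values():
        for name, cfg in svc.get("haproxy", {}).items():
            # kolla uses both booleans and 'yes' strings for enabled flags
            if cfg.get("enabled") not in (True, "yes"):
                continue
            lines = [
                f"listen {name}",
                f"    mode {cfg.get('mode', 'http')}",
                f"    bind *:{cfg['listen_port']}",
            ]
            # extra backend options such as 'option httpchk'
            lines += [f"    {extra}" for extra in cfg.get("backend_http_extra", [])]
            stanzas.append("\n".join(lines))
    return stanzas

# Minimal input mirroring the magnum-api item from the log above
services = {
    "magnum-api": {
        "haproxy": {
            "magnum_api": {
                "enabled": "yes", "mode": "http", "external": False,
                "port": "9511", "listen_port": "9511",
                "backend_http_extra": ["option httpchk"],
            }
        }
    }
}
print(render_haproxy_stanzas(services)[0])
```

The tasks are skipped on most nodes because kolla-ansible templates the HAProxy configuration only where the role's conditions apply (e.g. on hosts in the loadbalancer group).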
2026-01-10 14:39:31.139773 | orchestrator | 2026-01-10 14:39:31.139776 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-10 14:39:31.139780 | orchestrator | Saturday 10 January 2026 14:35:44 +0000 (0:00:01.046) 0:03:16.930 ****** 2026-01-10 14:39:31.139784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.139793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.139800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.139805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.139813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': 
['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.139817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.139821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.139827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.139833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.139837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.139851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 
'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.139855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.139859 | orchestrator | 2026-01-10 14:39:31.139862 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-01-10 14:39:31.139872 | orchestrator | Saturday 10 January 2026 14:35:49 +0000 (0:00:04.720) 0:03:21.651 ****** 2026-01-10 14:39:31.139876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.139882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.139887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.139891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.139896 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.139903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.139911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.139915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.139921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.139926 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.139930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.139937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.139941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.139949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': 
{'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.139953 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.139957 | orchestrator | 2026-01-10 14:39:31.139962 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-01-10 14:39:31.139966 | orchestrator | Saturday 10 January 2026 14:35:50 +0000 (0:00:00.834) 0:03:22.485 ****** 2026-01-10 14:39:31.139970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.139975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.139979 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.139985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.139989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 
'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.139993 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.139997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.140000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.140004 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.140008 | orchestrator | 2026-01-10 14:39:31.140012 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-01-10 14:39:31.140015 | orchestrator | Saturday 10 January 2026 14:35:50 +0000 (0:00:00.805) 0:03:23.290 ****** 2026-01-10 14:39:31.140019 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.140023 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.140027 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.140030 | orchestrator | 2026-01-10 14:39:31.140034 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-01-10 14:39:31.140038 | orchestrator | Saturday 10 January 2026 14:35:52 +0000 (0:00:01.341) 0:03:24.631 ****** 2026-01-10 14:39:31.140041 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.140053 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.140057 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.140061 | orchestrator | 2026-01-10 14:39:31.140064 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-01-10 14:39:31.140071 | orchestrator | Saturday 10 January 2026 14:35:54 
+0000 (0:00:02.142) 0:03:26.774 ****** 2026-01-10 14:39:31.140077 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:31.140093 | orchestrator | 2026-01-10 14:39:31.140097 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-01-10 14:39:31.140101 | orchestrator | Saturday 10 January 2026 14:35:55 +0000 (0:00:01.415) 0:03:28.189 ****** 2026-01-10 14:39:31.140105 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-10 14:39:31.140109 | orchestrator | 2026-01-10 14:39:31.140112 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-01-10 14:39:31.140116 | orchestrator | Saturday 10 January 2026 14:35:58 +0000 (0:00:03.257) 0:03:31.447 ****** 2026-01-10 14:39:31.140120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:39:31.140127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-10 14:39:31.140140 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.140146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:39:31.140154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-10 14:39:31.140157 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.140162 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:39:31.140166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': 
False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-10 14:39:31.140173 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.140177 | orchestrator | 2026-01-10 14:39:31.140181 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-01-10 14:39:31.140185 | orchestrator | Saturday 10 January 2026 14:36:02 +0000 (0:00:03.126) 0:03:34.573 ****** 2026-01-10 14:39:31.140261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:39:31.140275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-10 14:39:31.140279 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.140286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:39:31.140297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-10 14:39:31.140302 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.140308 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:39:31.140312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': 
False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-10 14:39:31.140316 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.140320 | orchestrator | 2026-01-10 14:39:31.140327 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-01-10 14:39:31.140331 | orchestrator | Saturday 10 January 2026 14:36:04 +0000 (0:00:02.613) 0:03:37.187 ****** 2026-01-10 14:39:31.140335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-10 14:39:31.140342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-10 14:39:31.140346 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.140350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-10 14:39:31.140354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-10 14:39:31.140357 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.140361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-10 14:39:31.140368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-10 14:39:31.140375 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.140379 | orchestrator | 2026-01-10 14:39:31.140383 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-10 14:39:31.140386 | orchestrator | Saturday 10 January 2026 14:36:07 +0000 (0:00:03.209) 0:03:40.396 ****** 2026-01-10 14:39:31.140390 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.140394 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.140397 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.140401 | orchestrator | 2026-01-10 14:39:31.140405 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-01-10 14:39:31.140408 | orchestrator | Saturday 10 January 2026 14:36:10 +0000 (0:00:02.170) 0:03:42.566 ****** 2026-01-10 14:39:31.140412 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.140416 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.140420 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.140423 | orchestrator | 2026-01-10 14:39:31.140427 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-01-10 14:39:31.140431 | orchestrator | Saturday 10 
January 2026 14:36:11 +0000 (0:00:01.834) 0:03:44.400 ****** 2026-01-10 14:39:31.140434 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.140438 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.140442 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.140445 | orchestrator | 2026-01-10 14:39:31.140449 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-01-10 14:39:31.140453 | orchestrator | Saturday 10 January 2026 14:36:12 +0000 (0:00:00.401) 0:03:44.802 ****** 2026-01-10 14:39:31.140457 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:31.140460 | orchestrator | 2026-01-10 14:39:31.140464 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-01-10 14:39:31.140468 | orchestrator | Saturday 10 January 2026 14:36:13 +0000 (0:00:01.436) 0:03:46.239 ****** 2026-01-10 14:39:31.140474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-10 14:39:31.140478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-10 14:39:31.140482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-10 14:39:31.140489 | orchestrator | 2026-01-10 14:39:31.140493 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-10 14:39:31.140497 | orchestrator | Saturday 10 January 2026 14:36:15 +0000 (0:00:01.599) 0:03:47.838 ****** 2026-01-10 14:39:31.140503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-10 14:39:31.140507 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.140512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-10 14:39:31.140518 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.140672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-10 14:39:31.140687 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.140693 | orchestrator | 2026-01-10 14:39:31.140699 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-01-10 14:39:31.140705 | orchestrator | Saturday 10 January 2026 14:36:15 +0000 (0:00:00.516) 0:03:48.355 ****** 2026-01-10 14:39:31.140712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-10 14:39:31.140720 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.140724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-10 14:39:31.140728 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.140732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-10 14:39:31.140741 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.140745 | orchestrator | 2026-01-10 14:39:31.140749 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-01-10 14:39:31.140752 | orchestrator | Saturday 10 January 2026 14:36:16 +0000 (0:00:01.108) 0:03:49.463 ****** 2026-01-10 14:39:31.140756 | orchestrator | skipping: 
[testbed-node-0] 2026-01-10 14:39:31.140760 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.140764 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.140767 | orchestrator | 2026-01-10 14:39:31.140771 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-01-10 14:39:31.140775 | orchestrator | Saturday 10 January 2026 14:36:17 +0000 (0:00:00.475) 0:03:49.939 ****** 2026-01-10 14:39:31.140779 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.140782 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.140786 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.140790 | orchestrator | 2026-01-10 14:39:31.140793 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-01-10 14:39:31.140801 | orchestrator | Saturday 10 January 2026 14:36:18 +0000 (0:00:01.340) 0:03:51.279 ****** 2026-01-10 14:39:31.140805 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.140809 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.140813 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.140816 | orchestrator | 2026-01-10 14:39:31.140820 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-01-10 14:39:31.140824 | orchestrator | Saturday 10 January 2026 14:36:19 +0000 (0:00:00.330) 0:03:51.609 ****** 2026-01-10 14:39:31.140828 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:31.140831 | orchestrator | 2026-01-10 14:39:31.140835 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-01-10 14:39:31.140839 | orchestrator | Saturday 10 January 2026 14:36:20 +0000 (0:00:01.552) 0:03:53.162 ****** 2026-01-10 14:39:31.140843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.140853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.140861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 
'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-10 14:39:31.140867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-10 14:39:31.140872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.140878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-10 14:39:31.140885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-10 14:39:31.140889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-10 14:39:31.140897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:39:31.140901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.140910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-10 14:39:31.140914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-10 14:39:31.140918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.140925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-10 14:39:31.140932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:39:31.140938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option 
httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.140943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.140950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.140957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-10 14:39:31.140961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-10 14:39:31.140967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.140971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.140975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-10 14:39:31.140983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-10 14:39:31.140989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-10 14:39:31.140993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-10 14:39:31.140999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-10 14:39:31.141003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:39:31.141011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-10 14:39:31.141022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-10 14:39:31.141032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-10 14:39:31.141044 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-10 14:39:31.141051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:39:31.141055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-10 14:39:31.141058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-10 14:39:31.141064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:39:31.141069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-10 14:39:31.141082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-10 14:39:31.141087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-10 14:39:31.141097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:39:31.141101 | orchestrator | 2026-01-10 14:39:31.141105 
| orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-01-10 14:39:31.141109 | orchestrator | Saturday 10 January 2026 14:36:26 +0000 (0:00:05.846) 0:03:59.009 ****** 2026-01-10 14:39:31.141113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.141123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-10 14:39:31.141133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-10 14:39:31.141137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.141149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141153 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-10 14:39:31.141161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-10 14:39:31.141168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-10 14:39:31.141176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-10 14:39:31.141182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-10 14:39:31.141187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:39:31.141194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 
'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-10 14:39:31.141201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-10 14:39:31.141212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-10 14:39:31.141219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-10 14:39:31.141223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-10 14:39:31.141227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:39:31.141231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-10 14:39:31.141255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-10 14:39:31.141259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-10 
14:39:31.141263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-10 14:39:31.141267 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.141271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.141287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-10 14:39:31.141295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-10 14:39:31.141301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:39:31.141309 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.141313 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-10 14:39:31.141319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-10 14:39:31.141327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-10 14:39:31.141331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-10 14:39:31.141337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:39:31.141344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-10 14:39:31.141426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-10 14:39:31.141436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-10 14:39:31.141452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 
'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:39:31.141465 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.141470 | orchestrator | 2026-01-10 14:39:31.141476 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-01-10 14:39:31.141482 | orchestrator | Saturday 10 January 2026 14:36:28 +0000 (0:00:01.996) 0:04:01.005 ****** 2026-01-10 14:39:31.141489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.141496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.141502 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.141508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.141514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.141520 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.141526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.141541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.141545 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.141549 | orchestrator | 2026-01-10 14:39:31.141553 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-10 14:39:31.141556 | orchestrator | Saturday 10 January 2026 14:36:30 +0000 (0:00:01.986) 0:04:02.991 ****** 2026-01-10 14:39:31.141560 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.141564 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.141567 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.141571 | orchestrator | 2026-01-10 14:39:31.141575 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-01-10 14:39:31.141578 | orchestrator | Saturday 10 January 2026 14:36:31 +0000 (0:00:01.315) 0:04:04.307 ****** 2026-01-10 14:39:31.141582 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.141586 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.141589 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.141593 | orchestrator | 2026-01-10 14:39:31.141597 | orchestrator | TASK [include_role : placement] ************************************************ 2026-01-10 14:39:31.141601 | orchestrator | Saturday 10 January 2026 
14:36:33 +0000 (0:00:02.087) 0:04:06.395 ****** 2026-01-10 14:39:31.141604 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:31.141641 | orchestrator | 2026-01-10 14:39:31.141646 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-10 14:39:31.141656 | orchestrator | Saturday 10 January 2026 14:36:35 +0000 (0:00:01.584) 0:04:07.979 ****** 2026-01-10 14:39:31.141661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:39:31.141669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:39:31.141677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:39:31.141682 | orchestrator | 2026-01-10 14:39:31.141685 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-01-10 14:39:31.141689 | orchestrator | Saturday 10 January 2026 14:36:40 +0000 (0:00:05.227) 0:04:13.207 ****** 2026-01-10 14:39:31.141693 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-10 14:39:31.141701 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.141707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-10 14:39:31.141711 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.141715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-10 14:39:31.141719 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.141723 | orchestrator | 2026-01-10 14:39:31.141727 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-01-10 14:39:31.141731 | orchestrator | Saturday 10 January 2026 14:36:41 +0000 (0:00:00.787) 0:04:13.995 ****** 2026-01-10 14:39:31.141735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-10 14:39:31.141741 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-10 14:39:31.141746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-10 14:39:31.141751 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.141758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-10 14:39:31.141762 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.141766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-10 14:39:31.141770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-10 14:39:31.141774 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.141777 | orchestrator | 2026-01-10 14:39:31.141781 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-10 14:39:31.141786 | orchestrator | Saturday 10 January 2026 14:36:42 +0000 (0:00:01.353) 
0:04:15.348 ****** 2026-01-10 14:39:31.141792 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.141798 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.141803 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.141809 | orchestrator | 2026-01-10 14:39:31.141816 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-10 14:39:31.141821 | orchestrator | Saturday 10 January 2026 14:36:44 +0000 (0:00:01.358) 0:04:16.706 ****** 2026-01-10 14:39:31.141827 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.141833 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.141840 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.141844 | orchestrator | 2026-01-10 14:39:31.141848 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-10 14:39:31.141851 | orchestrator | Saturday 10 January 2026 14:36:46 +0000 (0:00:02.233) 0:04:18.940 ****** 2026-01-10 14:39:31.141855 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:31.141871 | orchestrator | 2026-01-10 14:39:31.141875 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-10 14:39:31.141879 | orchestrator | Saturday 10 January 2026 14:36:47 +0000 (0:00:01.341) 0:04:20.282 ****** 2026-01-10 14:39:31.141886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.141893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.141902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.141908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.141913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.141933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.141948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.141962 | orchestrator | 2026-01-10 14:39:31.141966 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-10 14:39:31.141970 | 
orchestrator | Saturday 10 January 2026 14:36:55 +0000 (0:00:08.160) 0:04:28.442 ****** 2026-01-10 14:39:31.141974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.141978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.141985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.141989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2026-01-10 14:39:31.141999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.142003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.142007 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.142045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.142055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.142059 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.142064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.142074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.142078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.142083 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.142086 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.142090 | orchestrator | 2026-01-10 14:39:31.142094 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-10 14:39:31.142098 | orchestrator | Saturday 10 January 2026 14:36:57 +0000 (0:00:01.171) 0:04:29.614 ****** 2026-01-10 14:39:31.142102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.142106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.142111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.142120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.142124 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.142128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.142131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.142135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.142144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.142148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.142152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-01-10 14:39:31.142156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.142160 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.142164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.142167 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.142171 | orchestrator | 2026-01-10 14:39:31.142175 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-10 14:39:31.142197 | orchestrator | Saturday 10 January 2026 14:36:58 +0000 (0:00:01.096) 0:04:30.710 ****** 2026-01-10 14:39:31.142201 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.142205 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.142209 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.142212 | orchestrator | 2026-01-10 14:39:31.142216 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-10 14:39:31.142220 | orchestrator | Saturday 10 January 2026 14:36:59 +0000 (0:00:01.694) 0:04:32.404 ****** 2026-01-10 14:39:31.142224 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.142227 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.142231 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.142235 | orchestrator | 2026-01-10 14:39:31.142239 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-10 14:39:31.142242 | orchestrator | Saturday 10 January 2026 14:37:02 +0000 (0:00:02.177) 
0:04:34.582 ****** 2026-01-10 14:39:31.142256 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:31.142262 | orchestrator | 2026-01-10 14:39:31.142268 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-10 14:39:31.142275 | orchestrator | Saturday 10 January 2026 14:37:03 +0000 (0:00:01.468) 0:04:36.050 ****** 2026-01-10 14:39:31.142284 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-01-10 14:39:31.142292 | orchestrator | 2026-01-10 14:39:31.142299 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-10 14:39:31.142306 | orchestrator | Saturday 10 January 2026 14:37:05 +0000 (0:00:01.781) 0:04:37.832 ****** 2026-01-10 14:39:31.142313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-10 14:39:31.142321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-10 
14:39:31.142337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-10 14:39:31.142341 | orchestrator | 2026-01-10 14:39:31.142345 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-01-10 14:39:31.142348 | orchestrator | Saturday 10 January 2026 14:37:12 +0000 (0:00:06.878) 0:04:44.711 ****** 2026-01-10 14:39:31.142352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:39:31.142356 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.142360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 
 2026-01-10 14:39:31.142364 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.142368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:39:31.142375 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.142379 | orchestrator | 2026-01-10 14:39:31.142383 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-01-10 14:39:31.142387 | orchestrator | Saturday 10 January 2026 14:37:13 +0000 (0:00:01.142) 0:04:45.853 ****** 2026-01-10 14:39:31.142391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-10 14:39:31.142398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-10 14:39:31.142403 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.142407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-10 14:39:31.142410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-10 14:39:31.142414 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.142418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-10 14:39:31.142422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-10 14:39:31.142426 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.142430 | orchestrator | 2026-01-10 14:39:31.142434 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-10 14:39:31.142437 | orchestrator | Saturday 10 January 2026 14:37:14 +0000 (0:00:01.549) 0:04:47.403 ****** 2026-01-10 14:39:31.142441 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.142445 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.142448 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.142452 | orchestrator | 2026-01-10 14:39:31.142456 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-10 14:39:31.142462 | orchestrator | Saturday 10 January 2026 14:37:17 +0000 (0:00:02.422) 0:04:49.825 ****** 2026-01-10 14:39:31.142466 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.142470 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.142474 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.142477 | orchestrator | 2026-01-10 14:39:31.142481 | orchestrator | TASK [nova-cell : Configure loadbalancer for 
nova-spicehtml5proxy] ************* 2026-01-10 14:39:31.142485 | orchestrator | Saturday 10 January 2026 14:37:20 +0000 (0:00:03.076) 0:04:52.901 ****** 2026-01-10 14:39:31.142489 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-01-10 14:39:31.142493 | orchestrator | 2026-01-10 14:39:31.142497 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-01-10 14:39:31.142501 | orchestrator | Saturday 10 January 2026 14:37:21 +0000 (0:00:01.259) 0:04:54.160 ****** 2026-01-10 14:39:31.142508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:39:31.142513 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.142517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:39:31.142521 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.142524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:39:31.142528 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.142532 | orchestrator | 2026-01-10 14:39:31.142539 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-01-10 14:39:31.142543 | orchestrator | Saturday 10 January 2026 14:37:23 +0000 (0:00:01.339) 0:04:55.500 ****** 2026-01-10 14:39:31.142547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:39:31.142551 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.142555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:39:31.142559 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.142565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-10 14:39:31.142569 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.142573 | orchestrator | 2026-01-10 14:39:31.142580 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-01-10 14:39:31.142585 | orchestrator | Saturday 10 January 2026 14:37:24 +0000 (0:00:01.585) 0:04:57.085 ****** 2026-01-10 14:39:31.142588 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.142592 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.142596 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.142600 | orchestrator | 2026-01-10 14:39:31.142603 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-10 14:39:31.142620 | orchestrator | Saturday 10 January 2026 14:37:26 +0000 (0:00:01.585) 0:04:58.671 ****** 2026-01-10 14:39:31.142624 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:31.142628 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:31.142631 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:31.142635 | orchestrator | 2026-01-10 14:39:31.142639 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-10 14:39:31.142643 | orchestrator | Saturday 10 January 
2026 14:37:28 +0000 (0:00:02.447) 0:05:01.119 ****** 2026-01-10 14:39:31.142647 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:31.142650 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:31.142654 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:31.142658 | orchestrator | 2026-01-10 14:39:31.142662 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-01-10 14:39:31.142666 | orchestrator | Saturday 10 January 2026 14:37:31 +0000 (0:00:03.051) 0:05:04.171 ****** 2026-01-10 14:39:31.142670 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-01-10 14:39:31.142674 | orchestrator | 2026-01-10 14:39:31.142677 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-01-10 14:39:31.142681 | orchestrator | Saturday 10 January 2026 14:37:32 +0000 (0:00:00.907) 0:05:05.079 ****** 2026-01-10 14:39:31.142685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-10 14:39:31.142689 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.142696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-10 14:39:31.142700 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.142704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-10 14:39:31.142708 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.142712 | orchestrator | 2026-01-10 14:39:31.142716 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-01-10 14:39:31.142719 | orchestrator | Saturday 10 January 2026 14:37:34 +0000 (0:00:01.650) 0:05:06.729 ****** 2026-01-10 14:39:31.142727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-10 14:39:31.142731 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.142738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 
'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-10 14:39:31.142742 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.142745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-10 14:39:31.142749 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.142753 | orchestrator | 2026-01-10 14:39:31.142757 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-01-10 14:39:31.142761 | orchestrator | Saturday 10 January 2026 14:37:35 +0000 (0:00:01.070) 0:05:07.800 ****** 2026-01-10 14:39:31.142765 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.142768 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.142772 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.142776 | orchestrator | 2026-01-10 14:39:31.142780 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-10 14:39:31.142783 | orchestrator | Saturday 10 January 2026 14:37:36 +0000 (0:00:01.572) 0:05:09.372 ****** 2026-01-10 14:39:31.142787 | orchestrator | 
ok: [testbed-node-0] 2026-01-10 14:39:31.142791 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:31.142795 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:31.142798 | orchestrator | 2026-01-10 14:39:31.142802 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-10 14:39:31.142806 | orchestrator | Saturday 10 January 2026 14:37:39 +0000 (0:00:02.856) 0:05:12.229 ****** 2026-01-10 14:39:31.142810 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:31.142814 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:31.142817 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:31.142821 | orchestrator | 2026-01-10 14:39:31.142825 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-01-10 14:39:31.142829 | orchestrator | Saturday 10 January 2026 14:37:42 +0000 (0:00:02.998) 0:05:15.227 ****** 2026-01-10 14:39:31.142832 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:31.142836 | orchestrator | 2026-01-10 14:39:31.142840 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-01-10 14:39:31.142844 | orchestrator | Saturday 10 January 2026 14:37:44 +0000 (0:00:01.624) 0:05:16.852 ****** 2026-01-10 14:39:31.142851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:39:31.142860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:39:31.142868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:39:31.142872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:39:31.142876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.142880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:39:31.142891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:39:31.142905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:39:31.142912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:39:31.142916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:39:31.142920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.142924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:39:31.142937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:39:31.142941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:39:31.142954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.142959 | orchestrator | 2026-01-10 14:39:31.142966 | orchestrator | TASK [haproxy-config : Add 
configuration for octavia when using single external frontend] *** 2026-01-10 14:39:31.142970 | orchestrator | Saturday 10 January 2026 14:37:48 +0000 (0:00:03.689) 0:05:20.542 ****** 2026-01-10 14:39:31.142974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:39:31.142985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:39:31.142989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:39:31.143000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:39:31.143004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:39:31.143008 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.143015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-10 14:39:31.143019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-10 14:39:31.143023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager
3306'], 'timeout': '30'}}})
2026-01-10 14:39:31.143027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-10 14:39:31.143038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.143042 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:39:31.143046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-10 14:39:31.143052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-10 14:39:31.143056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-10 14:39:31.143061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-10 14:39:31.143068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:39:31.143078 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:39:31.143086 | orchestrator |
2026-01-10 14:39:31.143097 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-01-10 14:39:31.143106 | orchestrator | Saturday 10 January 2026 14:37:49 +0000 (0:00:01.324) 0:05:21.867 ******
2026-01-10 14:39:31.143112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-10 14:39:31.143123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-10 14:39:31.143130 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:39:31.143136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled':
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-10 14:39:31.143142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-10 14:39:31.143148 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:39:31.143154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-10 14:39:31.143159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-10 14:39:31.143166 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:39:31.143172 | orchestrator |
2026-01-10 14:39:31.143178 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-01-10 14:39:31.143184 | orchestrator | Saturday 10 January 2026 14:37:50 +0000 (0:00:00.929) 0:05:22.797 ******
2026-01-10 14:39:31.143190 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:39:31.143196 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:39:31.143203 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:39:31.143209 | orchestrator |
2026-01-10 14:39:31.143215 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-01-10 14:39:31.143222 | orchestrator | Saturday 10 January 2026 14:37:51 +0000 (0:00:01.287) 0:05:24.084 ******
2026-01-10 14:39:31.143233 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:39:31.143240 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:39:31.143246 | orchestrator |
changed: [testbed-node-2]
2026-01-10 14:39:31.143252 | orchestrator |
2026-01-10 14:39:31.143256 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-01-10 14:39:31.143260 | orchestrator | Saturday 10 January 2026 14:37:53 +0000 (0:00:02.158) 0:05:26.243 ******
2026-01-10 14:39:31.143264 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:39:31.143268 | orchestrator |
2026-01-10 14:39:31.143271 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2026-01-10 14:39:31.143275 | orchestrator | Saturday 10 January 2026 14:37:55 +0000 (0:00:01.702) 0:05:27.946 ******
2026-01-10 14:39:31.143286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.143291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes':
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.143299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.143307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-10 14:39:31.143312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-10 14:39:31.143323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards',
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-10 14:39:31.143328 | orchestrator |
2026-01-10 14:39:31.143332 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-01-10 14:39:31.143336 | orchestrator | Saturday 10 January 2026 14:38:00 +0000 (0:00:05.378) 0:05:33.324 ******
2026-01-10 14:39:31.143341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.143349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-10 14:39:31.143357 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:39:31.143361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.143368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-10 14:39:31.143372 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:39:31.143376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image':
'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:39:31.143384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-01-10 14:39:31.143392 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:39:31.143396 | orchestrator |
2026-01-10 14:39:31.143400 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-01-10 14:39:31.143404 | orchestrator | Saturday 10 January 2026 14:38:01 +0000 (0:00:00.696) 0:05:34.021 ******
2026-01-10 14:39:31.143408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})
2026-01-10 14:39:31.143413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-01-10 14:39:31.143417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-01-10 14:39:31.143421 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:39:31.143425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})
2026-01-10 14:39:31.143429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-01-10 14:39:31.143438 | orchestrator | skipping: [testbed-node-1] => (item={'key':
'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-01-10 14:39:31.143443 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:39:31.143446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})
2026-01-10 14:39:31.143450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-01-10 14:39:31.143455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-01-10 14:39:31.143458 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:39:31.143465 | orchestrator |
2026-01-10 14:39:31.143469 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-01-10 14:39:31.143473 | orchestrator | Saturday 10 January 2026 14:38:03 +0000 (0:00:01.712) 0:05:35.733 ******
2026-01-10 14:39:31.143477 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:39:31.143480 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:39:31.143503 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:39:31.143520 | orchestrator |
2026-01-10 14:39:31.143532 | orchestrator | TASK [proxysql-config :
Copying over opensearch ProxySQL rules config] *********
2026-01-10 14:39:31.143536 | orchestrator | Saturday 10 January 2026 14:38:03 +0000 (0:00:00.554) 0:05:36.288 ******
2026-01-10 14:39:31.143550 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:39:31.143554 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:39:31.143558 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:39:31.143561 | orchestrator |
2026-01-10 14:39:31.143570 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-01-10 14:39:31.143574 | orchestrator | Saturday 10 January 2026 14:38:05 +0000 (0:00:01.405) 0:05:37.693 ******
2026-01-10 14:39:31.143578 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:39:31.143582 | orchestrator |
2026-01-10 14:39:31.143585 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-01-10 14:39:31.143589 | orchestrator | Saturday 10 January 2026 14:38:07 +0000 (0:00:01.892) 0:05:39.585 ******
2026-01-10 14:39:31.143593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port':
'9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-01-10 14:39:31.143600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-01-10 14:39:31.143605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-10 14:39:31.143652 | orchestrator | skipping: [testbed-node-1] =>
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-10 14:39:31.143663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:39:31.143667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:39:31.143671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}}})  2026-01-10 14:39:31.143675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.143679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:39:31.143686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:39:31.143691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-10 14:39:31.143702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:39:31.143706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-01-10 14:39:31.143710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.143714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:39:31.143722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.143730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-10 14:39:31.143814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.143820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.143824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:39:31.143828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.143836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-10 14:39:31.143844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.143861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.143866 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:39:31.143870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:39:31.143880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-10 14:39:31.143891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.143897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.143903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:39:31.143910 | orchestrator | 2026-01-10 14:39:31.143931 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-10 14:39:31.143938 | orchestrator | Saturday 10 January 2026 14:38:11 +0000 (0:00:04.438) 0:05:44.023 ****** 2026-01-10 14:39:31.143945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-10 14:39:31.143951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:39:31.143957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.143973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.143979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:39:31.144004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.144012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-10 14:39:31.144019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-10 14:39:31.144037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.144043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:39:31.144049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.144059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.144065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:39:31.144104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.144145 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.144168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:39:31.144242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.144251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-10 14:39:31.144278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.144284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.144291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-10 14:39:31.144338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:39:31.144349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:39:31.144355 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.144362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.144369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.144380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:39:31.144387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:39:31.144397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-10 14:39:31.144431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.144438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:39:31.144446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:39:31.144453 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.144459 | orchestrator | 2026-01-10 14:39:31.144465 | orchestrator | TASK [haproxy-config : Configuring firewall 
for prometheus] ******************** 2026-01-10 14:39:31.144472 | orchestrator | Saturday 10 January 2026 14:38:12 +0000 (0:00:00.897) 0:05:44.921 ****** 2026-01-10 14:39:31.144484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-10 14:39:31.144492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-10 14:39:31.144499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.144511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.144519 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.144526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-10 14:39:31.144533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-10 14:39:31.144543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-10 14:39:31.144550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.144568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-10 14:39:31.144574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.144588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.144594 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.144605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-10 14:39:31.144628 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.144634 | orchestrator | 2026-01-10 14:39:31.144641 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-01-10 14:39:31.144647 | orchestrator | Saturday 10 January 2026 14:38:13 +0000 (0:00:01.459) 0:05:46.380 ****** 2026-01-10 14:39:31.144653 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.144664 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.144671 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.144676 | orchestrator | 2026-01-10 14:39:31.144682 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-01-10 14:39:31.144688 | orchestrator | Saturday 10 January 2026 14:38:14 +0000 (0:00:00.476) 0:05:46.856 ****** 2026-01-10 14:39:31.144693 | orchestrator | skipping: [testbed-node-0] 
2026-01-10 14:39:31.144699 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.144705 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.144747 | orchestrator | 2026-01-10 14:39:31.144759 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-01-10 14:39:31.144763 | orchestrator | Saturday 10 January 2026 14:38:15 +0000 (0:00:01.471) 0:05:48.328 ****** 2026-01-10 14:39:31.144767 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:31.144776 | orchestrator | 2026-01-10 14:39:31.144780 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-01-10 14:39:31.144784 | orchestrator | Saturday 10 January 2026 14:38:17 +0000 (0:00:01.494) 0:05:49.823 ****** 2026-01-10 14:39:31.144788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:39:31.144804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:39:31.144813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-10 14:39:31.144821 | orchestrator | 2026-01-10 14:39:31.144825 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using 
single external frontend] *** 2026-01-10 14:39:31.144829 | orchestrator | Saturday 10 January 2026 14:38:20 +0000 (0:00:03.029) 0:05:52.852 ****** 2026-01-10 14:39:31.144833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-10 14:39:31.144837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-10 14:39:31.144841 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.144845 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.144852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-10 14:39:31.144856 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.144860 | orchestrator | 2026-01-10 14:39:31.144864 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-01-10 14:39:31.144868 | orchestrator | Saturday 10 January 2026 14:38:20 +0000 (0:00:00.413) 0:05:53.266 ****** 2026-01-10 14:39:31.144872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-10 14:39:31.144876 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.144883 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-10 14:39:31.144887 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.144890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-10 14:39:31.144894 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.144898 | orchestrator | 2026-01-10 14:39:31.144904 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-01-10 14:39:31.144908 | orchestrator | Saturday 10 January 2026 14:38:21 +0000 (0:00:00.656) 0:05:53.922 ****** 2026-01-10 14:39:31.144911 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.144915 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.144919 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.144923 | orchestrator | 2026-01-10 14:39:31.144926 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-01-10 14:39:31.144930 | orchestrator | Saturday 10 January 2026 14:38:21 +0000 (0:00:00.457) 0:05:54.380 ****** 2026-01-10 14:39:31.144934 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.144938 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.144941 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.144945 | orchestrator | 2026-01-10 14:39:31.144949 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-01-10 14:39:31.144952 | orchestrator | Saturday 10 January 2026 14:38:23 +0000 (0:00:01.550) 0:05:55.930 ****** 2026-01-10 14:39:31.144956 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:31.144960 | orchestrator | 2026-01-10 14:39:31.144963 | orchestrator | TASK [haproxy-config : 
Copying over skyline haproxy config] ******************** 2026-01-10 14:39:31.144967 | orchestrator | Saturday 10 January 2026 14:38:25 +0000 (0:00:01.889) 0:05:57.819 ****** 2026-01-10 14:39:31.144971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-10 14:39:31.144979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-10 14:39:31.144990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-10 14:39:31.144998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:39:31.145002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:39:31.145009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:39:31.145017 | orchestrator | 2026-01-10 14:39:31.145021 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-01-10 14:39:31.145025 | orchestrator | Saturday 10 January 2026 14:38:31 +0000 (0:00:06.435) 0:06:04.255 ****** 2026-01-10 14:39:31.145031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-10 14:39:31.145036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 
'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-10 14:39:31.145040 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.145044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': 
'9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-10 14:39:31.145052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-10 14:39:31.145059 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.145066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-10 14:39:31.145070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-10 14:39:31.145074 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.145078 | orchestrator | 2026-01-10 14:39:31.145082 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-01-10 14:39:31.145086 | orchestrator | Saturday 10 January 2026 14:38:33 +0000 (0:00:01.458) 0:06:05.713 ****** 2026-01-10 14:39:31.145090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-10 14:39:31.145094 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-10 14:39:31.145099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-10 14:39:31.145106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-10 14:39:31.145114 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.145118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-10 14:39:31.145122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-10 14:39:31.145126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-10 14:39:31.145130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-10 14:39:31.145133 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.145137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-10 14:39:31.145144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-10 14:39:31.145148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-10 14:39:31.145152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-10 14:39:31.145155 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.145159 | orchestrator | 2026-01-10 14:39:31.145163 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-10 14:39:31.145167 | orchestrator | Saturday 10 January 2026 14:38:34 +0000 (0:00:00.970) 0:06:06.684 ****** 2026-01-10 14:39:31.145170 | orchestrator | changed: [testbed-node-0] 2026-01-10 
14:39:31.145174 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.145178 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.145182 | orchestrator | 2026-01-10 14:39:31.145185 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-01-10 14:39:31.145189 | orchestrator | Saturday 10 January 2026 14:38:35 +0000 (0:00:01.308) 0:06:07.993 ****** 2026-01-10 14:39:31.145193 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.145197 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.145200 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.145204 | orchestrator | 2026-01-10 14:39:31.145208 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-01-10 14:39:31.145211 | orchestrator | Saturday 10 January 2026 14:38:38 +0000 (0:00:02.479) 0:06:10.473 ****** 2026-01-10 14:39:31.145215 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.145223 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.145227 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.145230 | orchestrator | 2026-01-10 14:39:31.145234 | orchestrator | TASK [include_role : trove] **************************************************** 2026-01-10 14:39:31.145238 | orchestrator | Saturday 10 January 2026 14:38:38 +0000 (0:00:00.336) 0:06:10.810 ****** 2026-01-10 14:39:31.145254 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.145258 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.145271 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.145275 | orchestrator | 2026-01-10 14:39:31.145279 | orchestrator | TASK [include_role : venus] **************************************************** 2026-01-10 14:39:31.145295 | orchestrator | Saturday 10 January 2026 14:38:39 +0000 (0:00:00.673) 0:06:11.484 ****** 2026-01-10 14:39:31.145299 | orchestrator | skipping: [testbed-node-0] 2026-01-10 
14:39:31.145308 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.145312 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.145316 | orchestrator | 2026-01-10 14:39:31.145319 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-01-10 14:39:31.145323 | orchestrator | Saturday 10 January 2026 14:38:39 +0000 (0:00:00.345) 0:06:11.829 ****** 2026-01-10 14:39:31.145349 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.145353 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.145356 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.145366 | orchestrator | 2026-01-10 14:39:31.145373 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-01-10 14:39:31.145377 | orchestrator | Saturday 10 January 2026 14:38:39 +0000 (0:00:00.341) 0:06:12.171 ****** 2026-01-10 14:39:31.145380 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.145384 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.145388 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.145392 | orchestrator | 2026-01-10 14:39:31.145396 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-01-10 14:39:31.145399 | orchestrator | Saturday 10 January 2026 14:38:40 +0000 (0:00:00.372) 0:06:12.544 ****** 2026-01-10 14:39:31.145403 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:39:31.145407 | orchestrator | 2026-01-10 14:39:31.145410 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-01-10 14:39:31.145414 | orchestrator | Saturday 10 January 2026 14:38:41 +0000 (0:00:01.889) 0:06:14.433 ****** 2026-01-10 14:39:31.145419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-10 14:39:31.145427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-10 14:39:31.145432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-10 14:39:31.145440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:39:31.145445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:39:31.145449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-10 14:39:31.145453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:39:31.145457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:39:31.145483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-10 14:39:31.145488 | orchestrator | 2026-01-10 14:39:31.145492 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-01-10 14:39:31.145500 | orchestrator | Saturday 10 January 2026 14:38:44 +0000 (0:00:02.573) 0:06:17.006 ****** 2026-01-10 14:39:31.145504 | orchestrator | changed: [testbed-node-0] => { 2026-01-10 14:39:31.145508 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:39:31.145511 | orchestrator | } 2026-01-10 14:39:31.145515 | orchestrator | changed: 
[testbed-node-1] => { 2026-01-10 14:39:31.145519 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:39:31.145523 | orchestrator | } 2026-01-10 14:39:31.145527 | orchestrator | changed: [testbed-node-2] => { 2026-01-10 14:39:31.145531 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:39:31.145534 | orchestrator | } 2026-01-10 14:39:31.145538 | orchestrator | 2026-01-10 14:39:31.145542 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-10 14:39:31.145545 | orchestrator | Saturday 10 January 2026 14:38:45 +0000 (0:00:00.731) 0:06:17.738 ****** 2026-01-10 14:39:31.145549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-10 14:39:31.145554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:39:31.145560 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:39:31.145564 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.145568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-10 14:39:31.145572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:39:31.145582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:39:31.145586 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.145590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-10 14:39:31.145594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-10 14:39:31.145598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-10 14:39:31.145602 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.145605 | orchestrator | 2026-01-10 14:39:31.145630 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-01-10 14:39:31.145653 | orchestrator | Saturday 10 January 2026 14:38:46 +0000 (0:00:01.340) 0:06:19.079 ****** 2026-01-10 14:39:31.145659 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:31.145665 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:31.145669 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:31.145673 | orchestrator | 2026-01-10 14:39:31.145677 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-01-10 14:39:31.145681 | orchestrator | Saturday 10 January 2026 14:38:47 +0000 (0:00:01.149) 0:06:20.228 ****** 2026-01-10 14:39:31.145684 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:31.145688 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:31.145692 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:31.145695 | orchestrator | 2026-01-10 14:39:31.145699 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-01-10 14:39:31.145703 | orchestrator | Saturday 10 January 2026 14:38:48 +0000 (0:00:00.355) 0:06:20.583 ****** 2026-01-10 14:39:31.145707 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:31.145710 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:31.145714 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:31.145718 | orchestrator | 2026-01-10 14:39:31.145722 | 
orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-01-10 14:39:31.145731 | orchestrator | Saturday 10 January 2026 14:38:49 +0000 (0:00:00.954) 0:06:21.538 ****** 2026-01-10 14:39:31.145743 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:31.145747 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:31.145750 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:31.145765 | orchestrator | 2026-01-10 14:39:31.145769 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-01-10 14:39:31.145780 | orchestrator | Saturday 10 January 2026 14:38:50 +0000 (0:00:00.952) 0:06:22.490 ****** 2026-01-10 14:39:31.145784 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:31.145788 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:31.145791 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:31.145802 | orchestrator | 2026-01-10 14:39:31.145806 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-01-10 14:39:31.145810 | orchestrator | Saturday 10 January 2026 14:38:51 +0000 (0:00:01.296) 0:06:23.787 ****** 2026-01-10 14:39:31.145813 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.145817 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.145826 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.145830 | orchestrator | 2026-01-10 14:39:31.145834 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-01-10 14:39:31.145838 | orchestrator | Saturday 10 January 2026 14:38:56 +0000 (0:00:04.800) 0:06:28.587 ****** 2026-01-10 14:39:31.145841 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:31.145845 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:31.145849 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:31.145852 | orchestrator | 2026-01-10 14:39:31.145859 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup 
proxysql container] *************** 2026-01-10 14:39:31.145863 | orchestrator | Saturday 10 January 2026 14:38:58 +0000 (0:00:02.835) 0:06:31.423 ****** 2026-01-10 14:39:31.145867 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.145870 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.145874 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.145878 | orchestrator | 2026-01-10 14:39:31.145881 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-01-10 14:39:31.145885 | orchestrator | Saturday 10 January 2026 14:39:12 +0000 (0:00:13.889) 0:06:45.312 ****** 2026-01-10 14:39:31.145889 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:31.145893 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:31.145896 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:31.145900 | orchestrator | 2026-01-10 14:39:31.145904 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-01-10 14:39:31.145908 | orchestrator | Saturday 10 January 2026 14:39:14 +0000 (0:00:01.191) 0:06:46.503 ****** 2026-01-10 14:39:31.145911 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:39:31.145915 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:39:31.145919 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:39:31.145923 | orchestrator | 2026-01-10 14:39:31.145926 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-01-10 14:39:31.145930 | orchestrator | Saturday 10 January 2026 14:39:23 +0000 (0:00:09.356) 0:06:55.859 ****** 2026-01-10 14:39:31.145934 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.145937 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.145941 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.145945 | orchestrator | 2026-01-10 14:39:31.145948 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] 
**************** 2026-01-10 14:39:31.145952 | orchestrator | Saturday 10 January 2026 14:39:23 +0000 (0:00:00.359) 0:06:56.218 ****** 2026-01-10 14:39:31.145956 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.145959 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.145963 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.145967 | orchestrator | 2026-01-10 14:39:31.145971 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-01-10 14:39:31.145974 | orchestrator | Saturday 10 January 2026 14:39:24 +0000 (0:00:00.365) 0:06:56.584 ****** 2026-01-10 14:39:31.146061 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.146073 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.146079 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.146086 | orchestrator | 2026-01-10 14:39:31.146093 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-01-10 14:39:31.146098 | orchestrator | Saturday 10 January 2026 14:39:24 +0000 (0:00:00.740) 0:06:57.324 ****** 2026-01-10 14:39:31.146105 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.146111 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.146118 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.146122 | orchestrator | 2026-01-10 14:39:31.146125 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-01-10 14:39:31.146129 | orchestrator | Saturday 10 January 2026 14:39:25 +0000 (0:00:00.391) 0:06:57.716 ****** 2026-01-10 14:39:31.146133 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.146136 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.146140 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.146144 | orchestrator | 2026-01-10 14:39:31.146147 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] 
************* 2026-01-10 14:39:31.146151 | orchestrator | Saturday 10 January 2026 14:39:25 +0000 (0:00:00.429) 0:06:58.146 ****** 2026-01-10 14:39:31.146155 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:39:31.146159 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:39:31.146162 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:39:31.146166 | orchestrator | 2026-01-10 14:39:31.146174 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-01-10 14:39:31.146178 | orchestrator | Saturday 10 January 2026 14:39:26 +0000 (0:00:00.381) 0:06:58.527 ****** 2026-01-10 14:39:31.146182 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:31.146186 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:31.146189 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:31.146193 | orchestrator | 2026-01-10 14:39:31.146197 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-01-10 14:39:31.146200 | orchestrator | Saturday 10 January 2026 14:39:27 +0000 (0:00:01.423) 0:06:59.951 ****** 2026-01-10 14:39:31.146204 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:39:31.146208 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:39:31.146212 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:39:31.146215 | orchestrator | 2026-01-10 14:39:31.146219 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:39:31.146223 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-01-10 14:39:31.146227 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-01-10 14:39:31.146231 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-01-10 14:39:31.146235 | orchestrator | 2026-01-10 14:39:31.146239 | orchestrator | 2026-01-10 14:39:31.146243 | 
orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:39:31.146246 | orchestrator | Saturday 10 January 2026 14:39:28 +0000 (0:00:00.867) 0:07:00.818 ******
2026-01-10 14:39:31.146250 | orchestrator | ===============================================================================
2026-01-10 14:39:31.146264 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.89s
2026-01-10 14:39:31.146268 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.36s
2026-01-10 14:39:31.146272 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 8.16s
2026-01-10 14:39:31.146291 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 6.88s
2026-01-10 14:39:31.146314 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.44s
2026-01-10 14:39:31.146331 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.05s
2026-01-10 14:39:31.146334 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.85s
2026-01-10 14:39:31.146345 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 5.74s
2026-01-10 14:39:31.146349 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.52s
2026-01-10 14:39:31.146352 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.38s
2026-01-10 14:39:31.146362 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 5.23s
2026-01-10 14:39:31.146366 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.80s
2026-01-10 14:39:31.146369 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.72s
2026-01-10 14:39:31.146373 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.65s
2026-01-10 14:39:31.146377 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.58s
2026-01-10 14:39:31.146380 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.49s
2026-01-10 14:39:31.146384 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.44s
2026-01-10 14:39:31.146388 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.41s
2026-01-10 14:39:31.146392 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.24s
2026-01-10 14:39:31.146395 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.16s
2026-01-10 14:39:31.146399 | orchestrator | 2026-01-10 14:39:31 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:39:34.186352 | orchestrator | 2026-01-10 14:39:34 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:39:34.188480 | orchestrator | 2026-01-10 14:39:34 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED
2026-01-10 14:39:34.191093 | orchestrator | 2026-01-10 14:39:34 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:39:34.191156 | orchestrator | 2026-01-10 14:39:34 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:24.166275 | orchestrator | 2026-01-10 14:41:24 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state STARTED
2026-01-10 14:41:24.167267 | orchestrator | 2026-01-10 14:41:24 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in
state STARTED
2026-01-10 14:41:24.171129 | orchestrator | 2026-01-10 14:41:24 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:41:24.171185 | orchestrator | 2026-01-10 14:41:24 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:27.222227 | orchestrator | 2026-01-10 14:41:27 | INFO  | Task f29721a7-0425-4b61-a68c-0fc9d87346a6 is in state SUCCESS
2026-01-10 14:41:27.224209 | orchestrator |
2026-01-10 14:41:27.224334 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-10 14:41:27.224348 | orchestrator | 2.16.14
2026-01-10 14:41:27.224358 | orchestrator |
2026-01-10 14:41:27.224406 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-01-10 14:41:27.224414 | orchestrator |
2026-01-10 14:41:27.224422 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-10 14:41:27.224431 | orchestrator | Saturday 10 January 2026 14:29:40 +0000 (0:00:00.738) 0:00:00.738 ******
2026-01-10 14:41:27.224483 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:41:27.224494 | orchestrator |
2026-01-10 14:41:27.224703 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-10 14:41:27.224716 | orchestrator | Saturday 10 January 2026 14:29:41 +0000 (0:00:01.087) 0:00:01.826 ******
2026-01-10 14:41:27.224721 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.224727 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.224731 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.224737 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.224742 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.224746 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.224751 | orchestrator |
2026-01-10 14:41:27.224756 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-10 14:41:27.224832 | orchestrator | Saturday 10 January 2026 14:29:43 +0000 (0:00:01.463) 0:00:03.290 ******
2026-01-10 14:41:27.224842 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.224851 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.224858 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.224865 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.224873 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.224880 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.224887 | orchestrator |
2026-01-10 14:41:27.224895 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-10 14:41:27.224903 | orchestrator | Saturday 10 January 2026 14:29:44 +0000 (0:00:01.112) 0:00:04.072 ******
2026-01-10 14:41:27.224911 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.224919 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.224955 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.224965 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.224973 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.224980 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.225012 | orchestrator |
2026-01-10 14:41:27.225021 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-10 14:41:27.225029 | orchestrator | Saturday 10 January 2026 14:29:45 +0000 (0:00:01.112) 0:00:05.184 ******
2026-01-10 14:41:27.225038 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.225046 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.225055 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.225063 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.225071 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.225081 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.225089 | orchestrator |
2026-01-10 14:41:27.225098 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-10 14:41:27.225106 | orchestrator | Saturday 10 January 2026 14:29:46 +0000 (0:00:00.771) 0:00:05.956 ******
2026-01-10 14:41:27.225115 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.225123 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.225132 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.225140 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.225148 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.225156 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.225172 | orchestrator |
2026-01-10 14:41:27.225200 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-10 14:41:27.225215 | orchestrator | Saturday 10 January 2026 14:29:46 +0000 (0:00:00.630) 0:00:06.586 ******
2026-01-10 14:41:27.225223 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.225230 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.225287 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.225292 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.225296 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.225301 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.225305 | orchestrator |
2026-01-10 14:41:27.225310 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-10 14:41:27.225315 | orchestrator | Saturday 10 January 2026 14:29:47 +0000 (0:00:00.953) 0:00:07.540 ******
2026-01-10 14:41:27.225320 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.225326 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.225330 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.225356 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.225361 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.225366 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.225371 | orchestrator |
2026-01-10 14:41:27.225375 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-10 14:41:27.225380 | orchestrator | Saturday 10 January 2026 14:29:48 +0000 (0:00:01.112) 0:00:08.652 ******
2026-01-10 14:41:27.225388 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.225395 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.225436 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.225443 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.225459 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.225466 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.225531 | orchestrator |
2026-01-10 14:41:27.225538 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-10 14:41:27.225543 | orchestrator | Saturday 10 January 2026 14:29:49 +0000 (0:00:00.997) 0:00:09.652 ******
2026-01-10 14:41:27.225548 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-10 14:41:27.225552 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-10 14:41:27.225557 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-10 14:41:27.225561 | orchestrator |
2026-01-10 14:41:27.225566 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-10 14:41:27.225570 | orchestrator | Saturday 10 January 2026 14:29:51 +0000 (0:00:01.251) 0:00:10.903 ******
2026-01-10 14:41:27.225575 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.225579 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.225584 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.225606 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.225611 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.225616 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.225620 | orchestrator |
2026-01-10 14:41:27.225625 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-10 14:41:27.225629 | orchestrator | Saturday 10 January 2026 14:29:52 +0000 (0:00:01.030) 0:00:11.933 ******
2026-01-10 14:41:27.225634 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-10 14:41:27.225638 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-10 14:41:27.225650 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-10 14:41:27.225655 | orchestrator |
2026-01-10 14:41:27.225702 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-10 14:41:27.225707 | orchestrator | Saturday 10 January 2026 14:29:54 +0000 (0:00:02.842) 0:00:14.775 ******
2026-01-10 14:41:27.225712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-10 14:41:27.225730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-10 14:41:27.225777 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-10 14:41:27.225782 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.225787 | orchestrator |
2026-01-10 14:41:27.225791 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-10 14:41:27.225796 | orchestrator | Saturday 10 January 2026 14:29:55 +0000 (0:00:00.793) 0:00:15.569 ******
2026-01-10 14:41:27.225817 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.225826 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.225830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.225835 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.225840 | orchestrator |
2026-01-10 14:41:27.225844 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-10 14:41:27.225848 | orchestrator | Saturday 10 January 2026 14:29:56 +0000 (0:00:01.229) 0:00:16.798 ******
2026-01-10 14:41:27.225854 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.225868 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.225873 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.225877 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.225882 | orchestrator |
2026-01-10 14:41:27.225886 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-01-10 14:41:27.225917 | orchestrator | Saturday 10 January 2026 14:29:57 +0000 (0:00:00.545) 0:00:17.344 ******
2026-01-10 14:41:27.225951 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-10 14:29:52.813306', 'end': '2026-01-10 14:29:53.134779', 'delta': '0:00:00.321473', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.225966 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-10 14:29:53.717913', 'end': '2026-01-10 14:29:53.980253', 'delta': '0:00:00.262340', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.225971 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-10 14:29:54.446609', 'end': '2026-01-10 14:29:54.745592', 'delta': '0:00:00.298983', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.225976 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.225980 | orchestrator |
2026-01-10 14:41:27.225985 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-10 14:41:27.225989 | orchestrator | Saturday 10 January 2026 14:29:57 +0000 (0:00:00.408) 0:00:17.753 ******
2026-01-10 14:41:27.225998 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.226002 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.226007 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.226011 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.226052 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.226057 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.226061 | orchestrator |
2026-01-10 14:41:27.226066 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-10 14:41:27.226071 | orchestrator | Saturday 10 January 2026 14:30:00 +0000 (0:00:02.912) 0:00:20.665 ******
2026-01-10 14:41:27.226075 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-10 14:41:27.226080 | orchestrator |
2026-01-10 14:41:27.226084 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-10 14:41:27.226089 | orchestrator | Saturday 10 January 2026 14:30:01 +0000 (0:00:00.825) 0:00:21.491 ******
2026-01-10 14:41:27.226093 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.226097 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.226102 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.226106 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.226134 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.226140 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.226144 | orchestrator |
2026-01-10 14:41:27.226149 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-10 14:41:27.226153 | orchestrator | Saturday 10 January 2026 14:30:03 +0000 (0:00:01.548) 0:00:23.040 ******
2026-01-10 14:41:27.226158 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.226162 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.226167 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.226171 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.226176 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.226180 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.226185 | orchestrator |
2026-01-10 14:41:27.226189 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-10 14:41:27.226194 | orchestrator | Saturday 10 January 2026 14:30:04 +0000 (0:00:01.250) 0:00:24.291 ******
2026-01-10 14:41:27.226198 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.226203 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.226207 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.226211 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.226216 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.226220 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.226225 | orchestrator |
2026-01-10 14:41:27.226229 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-10 14:41:27.226234 | orchestrator | Saturday 10 January 2026 14:30:05 +0000 (0:00:01.377) 0:00:25.668 ******
2026-01-10 14:41:27.226238 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.226242 | orchestrator |
2026-01-10 14:41:27.226259 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-10 14:41:27.226264 | orchestrator | Saturday 10 January 2026 14:30:05 +0000 (0:00:00.124) 0:00:25.793 ******
2026-01-10 14:41:27.226268 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.226273 | orchestrator |
2026-01-10 14:41:27.226277 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-10 14:41:27.226282 | orchestrator | Saturday 10 January 2026 14:30:06 +0000 (0:00:00.288) 0:00:26.081 ******
2026-01-10 14:41:27.226286 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.226291 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.226295 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.226304 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.226309 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.226314 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.226318 | orchestrator |
2026-01-10 14:41:27.226323 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-10 14:41:27.226332 | orchestrator | Saturday 10 January 2026 14:30:06 +0000 (0:00:00.704) 0:00:26.786 ******
2026-01-10 14:41:27.226336 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.226341 | orchestrator |
skipping: [testbed-node-4] 2026-01-10 14:41:27.226345 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.226350 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.226354 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.226359 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.226363 | orchestrator | 2026-01-10 14:41:27.226371 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-10 14:41:27.226376 | orchestrator | Saturday 10 January 2026 14:30:07 +0000 (0:00:00.855) 0:00:27.642 ****** 2026-01-10 14:41:27.226380 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.226384 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.226389 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.226393 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.226398 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.226402 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.226406 | orchestrator | 2026-01-10 14:41:27.226411 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-10 14:41:27.226415 | orchestrator | Saturday 10 January 2026 14:30:08 +0000 (0:00:00.779) 0:00:28.421 ****** 2026-01-10 14:41:27.226420 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.226424 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.226428 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.226433 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.226437 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.226442 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.226446 | orchestrator | 2026-01-10 14:41:27.226450 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-10 14:41:27.226455 | orchestrator | Saturday 10 January 2026 14:30:09 +0000 
(0:00:00.857) 0:00:29.279 ****** 2026-01-10 14:41:27.226459 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.226464 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.226468 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.226472 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.226477 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.226481 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.226486 | orchestrator | 2026-01-10 14:41:27.226536 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-10 14:41:27.226543 | orchestrator | Saturday 10 January 2026 14:30:10 +0000 (0:00:00.591) 0:00:29.870 ****** 2026-01-10 14:41:27.226548 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.226552 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.226557 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.226562 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.226570 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.226577 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.226584 | orchestrator | 2026-01-10 14:41:27.226684 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-10 14:41:27.226693 | orchestrator | Saturday 10 January 2026 14:30:10 +0000 (0:00:00.775) 0:00:30.646 ****** 2026-01-10 14:41:27.226701 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.226706 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.226710 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.226715 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.226719 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.226724 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.226728 | orchestrator | 2026-01-10 14:41:27.226733 | orchestrator | TASK [ceph-facts : 
Collect existed devices] ************************************ 2026-01-10 14:41:27.226737 | orchestrator | Saturday 10 January 2026 14:30:11 +0000 (0:00:00.551) 0:00:31.197 ****** 2026-01-10 14:41:27.226748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6bac10f4--8703--5b93--90a3--91ba865f27b3-osd--block--6bac10f4--8703--5b93--90a3--91ba865f27b3', 'dm-uuid-LVM-uH2Al5eNaR4ncNlj6O0iPJ5SHvylf9HIo5uifasG5P7LrbpfS2web6cXCqroC1KK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.226755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ef830303--d908--5775--964e--bef8687288a6-osd--block--ef830303--d908--5775--964e--bef8687288a6', 'dm-uuid-LVM-hwyi5YZZ5T0V9hBEIvqpWwg3zruYopvYJ3dpdkoCkycM0D263lUAQLxdyI128ab2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.226791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.226814 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.226828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.226832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.226844 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.226849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.226853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.226903 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.226926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part1', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part14', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part15', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part16', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:27.226936 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6bac10f4--8703--5b93--90a3--91ba865f27b3-osd--block--6bac10f4--8703--5b93--90a3--91ba865f27b3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-B32ZwJ-eBQc-y79V-idgx-GHMM-RIEc-kPdv3Y', 'scsi-0QEMU_QEMU_HARDDISK_70c6fd94-218f-483a-b965-10c70b1b97fc', 'scsi-SQEMU_QEMU_HARDDISK_70c6fd94-218f-483a-b965-10c70b1b97fc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:27.226945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ef830303--d908--5775--964e--bef8687288a6-osd--block--ef830303--d908--5775--964e--bef8687288a6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dDr3Q4-vkot-1toB-qHzf-rt63-1YC4-a2cdsm', 'scsi-0QEMU_QEMU_HARDDISK_f7705bd4-29b3-411e-b8b9-50568fcffd73', 'scsi-SQEMU_QEMU_HARDDISK_f7705bd4-29b3-411e-b8b9-50568fcffd73'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:27.226958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0fad3856--f6d1--50e2--a5cb--d9f4a0859299-osd--block--0fad3856--f6d1--50e2--a5cb--d9f4a0859299', 'dm-uuid-LVM-NrSndplu8YjxJZJR7UELD6OYsvV50bPZ1u2VEUIggImfgCLc9zhjhhZDbVtJX9QT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.226966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2130b2ec-580e-4b39-88b4-748d7926916f', 'scsi-SQEMU_QEMU_HARDDISK_2130b2ec-580e-4b39-88b4-748d7926916f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:27.226980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:27.226991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--39355231--3192--5ff7--9e27--947e8968f1e9-osd--block--39355231--3192--5ff7--9e27--947e8968f1e9', 'dm-uuid-LVM-0dLLKJtm6H324NqK1ZOHec17jVXqGr5vNKj6jpTpF1lhxA6YbcYHPuFYNGYyDWSE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.226999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.227006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.227014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.227027 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.227035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.227042 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.227049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.227054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.227088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.227109 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part1', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part14', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part15', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part16', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:27.227198 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0fad3856--f6d1--50e2--a5cb--d9f4a0859299-osd--block--0fad3856--f6d1--50e2--a5cb--d9f4a0859299'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Vvc4YC-2Ex3-eCr9-vnZS-ADWO-gj04-g7abB6', 'scsi-0QEMU_QEMU_HARDDISK_763a4a26-d97a-40e2-a569-d464b2971007', 'scsi-SQEMU_QEMU_HARDDISK_763a4a26-d97a-40e2-a569-d464b2971007'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:27.227204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--39355231--3192--5ff7--9e27--947e8968f1e9-osd--block--39355231--3192--5ff7--9e27--947e8968f1e9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-60uRIt-IULU-CMat-eEKR-GmLG-bbFO-QnA2Tt', 'scsi-0QEMU_QEMU_HARDDISK_45b03c06-0ab6-4b62-8b16-77c772305c6a', 'scsi-SQEMU_QEMU_HARDDISK_45b03c06-0ab6-4b62-8b16-77c772305c6a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:27.227225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6c5241f-60aa-42cf-822c-98275b24deb1', 'scsi-SQEMU_QEMU_HARDDISK_e6c5241f-60aa-42cf-822c-98275b24deb1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:27.227231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:41:27.227236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4cb3fc90--004d--5443--9ae7--f5eff9c4438f-osd--block--4cb3fc90--004d--5443--9ae7--f5eff9c4438f', 'dm-uuid-LVM-fIYasPDKY6yyb0lbN1hYZudeZijwr05t0znOImwORuoEgjaGyB4fyTgEynvK6HFS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:41:27.227244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dec76364--a7ee--5469--8bc3--2dcf5060f83e-osd--block--dec76364--a7ee--5469--8bc3--2dcf5060f83e', 
'dm-uuid-LVM-ELjjXI7PsiwNbDCw3Snq8tT0U2GbdoLWczg8BVDKOFs22fypwHVROqY12ftkOQHx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227299 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.227307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part1', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part14', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part15', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part16', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-10 14:41:27.227334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4cb3fc90--004d--5443--9ae7--f5eff9c4438f-osd--block--4cb3fc90--004d--5443--9ae7--f5eff9c4438f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-J4k07S-Vr1V-a78k-IT0w-c3z0-Eftr-0EfL69', 'scsi-0QEMU_QEMU_HARDDISK_4515c98e-1f25-421e-81d3-264e20827141', 'scsi-SQEMU_QEMU_HARDDISK_4515c98e-1f25-421e-81d3-264e20827141'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-10 14:41:27.227346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--dec76364--a7ee--5469--8bc3--2dcf5060f83e-osd--block--dec76364--a7ee--5469--8bc3--2dcf5060f83e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rvYJNa-YbX5-CU38-DuHY-Y6W2-TgfW-vshxzL', 'scsi-0QEMU_QEMU_HARDDISK_9cc4e4a6-fdb0-4f2b-8497-7d80ad86af00', 'scsi-SQEMU_QEMU_HARDDISK_9cc4e4a6-fdb0-4f2b-8497-7d80ad86af00'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-10 14:41:27.227355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_355a7212-75f2-41c4-a284-fbc15ac49d3c', 'scsi-SQEMU_QEMU_HARDDISK_355a7212-75f2-41c4-a284-fbc15ac49d3c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-10 14:41:27.227379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-10 14:41:27.227388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725', 'scsi-SQEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725-part15', 'scsi-SQEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725-part16', 'scsi-SQEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-10 14:41:27.227470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-10 14:41:27.227475 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.227483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227495 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.227546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f', 'scsi-SQEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f-part1', 'scsi-SQEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f-part14', 'scsi-SQEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f-part15', 'scsi-SQEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f-part16', 'scsi-SQEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-10 14:41:27.227622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-10 14:41:27.227627 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.227634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-10 14:41:27.227655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d', 'scsi-SQEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d-part1', 'scsi-SQEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d-part14', 'scsi-SQEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d-part15', 'scsi-SQEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d-part16', 'scsi-SQEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-10 14:41:27.227664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-10 14:41:27.227669 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.227673 | orchestrator |
2026-01-10 14:41:27.227682 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-01-10 14:41:27.227687 | orchestrator | Saturday 10 January 2026 14:30:12 +0000 (0:00:01.441) 0:00:32.639 ******
2026-01-10 14:41:27.227695 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6bac10f4--8703--5b93--90a3--91ba865f27b3-osd--block--6bac10f4--8703--5b93--90a3--91ba865f27b3', 'dm-uuid-LVM-uH2Al5eNaR4ncNlj6O0iPJ5SHvylf9HIo5uifasG5P7LrbpfS2web6cXCqroC1KK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.227701 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ef830303--d908--5775--964e--bef8687288a6-osd--block--ef830303--d908--5775--964e--bef8687288a6', 'dm-uuid-LVM-hwyi5YZZ5T0V9hBEIvqpWwg3zruYopvYJ3dpdkoCkycM0D263lUAQLxdyI128ab2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.227706 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.227711 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.227716 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.227723 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.227737 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.227742 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.227747 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.227751 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.227764 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part1', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part14', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part15', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part16', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.227774 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6bac10f4--8703--5b93--90a3--91ba865f27b3-osd--block--6bac10f4--8703--5b93--90a3--91ba865f27b3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-B32ZwJ-eBQc-y79V-idgx-GHMM-RIEc-kPdv3Y', 'scsi-0QEMU_QEMU_HARDDISK_70c6fd94-218f-483a-b965-10c70b1b97fc', 'scsi-SQEMU_QEMU_HARDDISK_70c6fd94-218f-483a-b965-10c70b1b97fc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.227780 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ef830303--d908--5775--964e--bef8687288a6-osd--block--ef830303--d908--5775--964e--bef8687288a6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dDr3Q4-vkot-1toB-qHzf-rt63-1YC4-a2cdsm', 'scsi-0QEMU_QEMU_HARDDISK_f7705bd4-29b3-411e-b8b9-50568fcffd73', 'scsi-SQEMU_QEMU_HARDDISK_f7705bd4-29b3-411e-b8b9-50568fcffd73'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.227785 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2130b2ec-580e-4b39-88b4-748d7926916f', 'scsi-SQEMU_QEMU_HARDDISK_2130b2ec-580e-4b39-88b4-748d7926916f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.227793 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None,
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227804 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0fad3856--f6d1--50e2--a5cb--d9f4a0859299-osd--block--0fad3856--f6d1--50e2--a5cb--d9f4a0859299', 'dm-uuid-LVM-NrSndplu8YjxJZJR7UELD6OYsvV50bPZ1u2VEUIggImfgCLc9zhjhhZDbVtJX9QT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227809 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--39355231--3192--5ff7--9e27--947e8968f1e9-osd--block--39355231--3192--5ff7--9e27--947e8968f1e9', 'dm-uuid-LVM-0dLLKJtm6H324NqK1ZOHec17jVXqGr5vNKj6jpTpF1lhxA6YbcYHPuFYNGYyDWSE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227813 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227818 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227823 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227835 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227842 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227847 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227852 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.227857 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227862 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227873 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part1', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part14', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part15', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part16', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-10 14:41:27.227881 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0fad3856--f6d1--50e2--a5cb--d9f4a0859299-osd--block--0fad3856--f6d1--50e2--a5cb--d9f4a0859299'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Vvc4YC-2Ex3-eCr9-vnZS-ADWO-gj04-g7abB6', 'scsi-0QEMU_QEMU_HARDDISK_763a4a26-d97a-40e2-a569-d464b2971007', 'scsi-SQEMU_QEMU_HARDDISK_763a4a26-d97a-40e2-a569-d464b2971007'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227886 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--39355231--3192--5ff7--9e27--947e8968f1e9-osd--block--39355231--3192--5ff7--9e27--947e8968f1e9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-60uRIt-IULU-CMat-eEKR-GmLG-bbFO-QnA2Tt', 'scsi-0QEMU_QEMU_HARDDISK_45b03c06-0ab6-4b62-8b16-77c772305c6a', 'scsi-SQEMU_QEMU_HARDDISK_45b03c06-0ab6-4b62-8b16-77c772305c6a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227892 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6c5241f-60aa-42cf-822c-98275b24deb1', 'scsi-SQEMU_QEMU_HARDDISK_e6c5241f-60aa-42cf-822c-98275b24deb1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227902 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227911 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4cb3fc90--004d--5443--9ae7--f5eff9c4438f-osd--block--4cb3fc90--004d--5443--9ae7--f5eff9c4438f', 'dm-uuid-LVM-fIYasPDKY6yyb0lbN1hYZudeZijwr05t0znOImwORuoEgjaGyB4fyTgEynvK6HFS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227916 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dec76364--a7ee--5469--8bc3--2dcf5060f83e-osd--block--dec76364--a7ee--5469--8bc3--2dcf5060f83e', 'dm-uuid-LVM-ELjjXI7PsiwNbDCw3Snq8tT0U2GbdoLWczg8BVDKOFs22fypwHVROqY12ftkOQHx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227920 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227925 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.227930 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227935 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227949 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227956 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227961 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227966 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227971 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227976 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227984 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.227996 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part1', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part14', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part15', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 
'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part16', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.228001 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.228008 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4cb3fc90--004d--5443--9ae7--f5eff9c4438f-osd--block--4cb3fc90--004d--5443--9ae7--f5eff9c4438f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-J4k07S-Vr1V-a78k-IT0w-c3z0-Eftr-0EfL69', 'scsi-0QEMU_QEMU_HARDDISK_4515c98e-1f25-421e-81d3-264e20827141', 'scsi-SQEMU_QEMU_HARDDISK_4515c98e-1f25-421e-81d3-264e20827141'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.228163 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:41:27.228182 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--dec76364--a7ee--5469--8bc3--2dcf5060f83e-osd--block--dec76364--a7ee--5469--8bc3--2dcf5060f83e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rvYJNa-YbX5-CU38-DuHY-Y6W2-TgfW-vshxzL', 'scsi-0QEMU_QEMU_HARDDISK_9cc4e4a6-fdb0-4f2b-8497-7d80ad86af00', 'scsi-SQEMU_QEMU_HARDDISK_9cc4e4a6-fdb0-4f2b-8497-7d80ad86af00'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228187 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228193 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228198 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_355a7212-75f2-41c4-a284-fbc15ac49d3c', 'scsi-SQEMU_QEMU_HARDDISK_355a7212-75f2-41c4-a284-fbc15ac49d3c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228208 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228218 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228226 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228232 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725', 'scsi-SQEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725-part15', 'scsi-SQEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725-part16', 'scsi-SQEMU_QEMU_HARDDISK_eb1a97c9-b500-4e71-8a2b-c22723210725-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228243 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228251 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228256 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228261 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228266 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228276 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228281 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228289 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228308 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228314 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d', 'scsi-SQEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d-part1', 'scsi-SQEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d-part14', 'scsi-SQEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d-part15', 'scsi-SQEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d-part16', 'scsi-SQEMU_QEMU_HARDDISK_6ed1754d-0592-4676-ba37-32169761691d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228333 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228344 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.228353 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.228360 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.228371 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}},
'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228379 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228387 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228395 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228408 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228416 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228427 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228435 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228482 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f', 'scsi-SQEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f-part1', 'scsi-SQEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f-part14', 'scsi-SQEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f-part15', 'scsi-SQEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f-part16', 'scsi-SQEMU_QEMU_HARDDISK_9057548c-db5e-442a-947a-e28af578a58f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228497 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:41:27.228521 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.228528 | orchestrator |
2026-01-10 14:41:27.228541 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists]
******************************
2026-01-10 14:41:27.228550 | orchestrator | Saturday 10 January 2026 14:30:14 +0000 (0:00:01.678) 0:00:34.317 ******
2026-01-10 14:41:27.228557 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.228565 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.228573 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.228580 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.228587 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.228595 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.228600 | orchestrator |
2026-01-10 14:41:27.228604 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-10 14:41:27.228609 | orchestrator | Saturday 10 January 2026 14:30:15 +0000 (0:00:01.207) 0:00:35.525 ******
2026-01-10 14:41:27.228617 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.228623 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.228631 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.228638 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.228668 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.228675 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.228683 | orchestrator |
2026-01-10 14:41:27.228688 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-10 14:41:27.228692 | orchestrator | Saturday 10 January 2026 14:30:16 +0000 (0:00:00.906) 0:00:36.432 ******
2026-01-10 14:41:27.228697 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.228701 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.228706 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.228711 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.228715 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.228719 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.228724 | orchestrator |
2026-01-10 14:41:27.228729 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-10 14:41:27.228739 | orchestrator | Saturday 10 January 2026 14:30:18 +0000 (0:00:01.515) 0:00:37.948 ******
2026-01-10 14:41:27.228743 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.228748 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.228752 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.228756 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.228761 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.228765 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.228770 | orchestrator |
2026-01-10 14:41:27.228774 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-10 14:41:27.228779 | orchestrator | Saturday 10 January 2026 14:30:18 +0000 (0:00:00.816) 0:00:38.764 ******
2026-01-10 14:41:27.228784 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.228788 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.228792 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.228797 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.228801 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.228806 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.228810 | orchestrator |
2026-01-10 14:41:27.228815 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-10 14:41:27.228819 | orchestrator | Saturday 10 January 2026 14:30:20 +0000 (0:00:01.387) 0:00:40.152 ******
2026-01-10 14:41:27.228824 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.228828 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.228832 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.228837 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.228841 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.228846 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.228850 | orchestrator |
2026-01-10 14:41:27.228855 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-10 14:41:27.228859 | orchestrator | Saturday 10 January 2026 14:30:21 +0000 (0:00:00.880) 0:00:41.032 ******
2026-01-10 14:41:27.228864 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-10 14:41:27.228869 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-10 14:41:27.228874 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-10 14:41:27.228878 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-10 14:41:27.228883 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-01-10 14:41:27.228887 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-10 14:41:27.228892 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-10 14:41:27.228896 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-10 14:41:27.228901 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-10 14:41:27.228905 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-10 14:41:27.228909 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-10 14:41:27.228914 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-10 14:41:27.228918 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-10 14:41:27.228923 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-01-10 14:41:27.228927 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-01-10 14:41:27.228932 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-10 14:41:27.228936 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-10 14:41:27.228941 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-01-10 14:41:27.228945 | orchestrator |
2026-01-10 14:41:27.228950 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-10 14:41:27.228954 | orchestrator | Saturday 10 January 2026 14:30:26 +0000 (0:00:05.546) 0:00:46.578 ******
2026-01-10 14:41:27.228960 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-10 14:41:27.228964 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-10 14:41:27.228974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-10 14:41:27.228979 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.228983 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-10 14:41:27.228988 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-10 14:41:27.228992 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-10 14:41:27.228996 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.229001 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-10 14:41:27.229010 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-10 14:41:27.229015 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-10 14:41:27.229019 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.229024 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-10 14:41:27.229028 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-10 14:41:27.229033 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-10 14:41:27.229037 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.229042 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-10 14:41:27.229049 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-10 14:41:27.229054 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-10 14:41:27.229058 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.229063 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-10 14:41:27.229068 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-10 14:41:27.229072 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-10 14:41:27.229076 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.229081 | orchestrator |
2026-01-10 14:41:27.229086 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-10 14:41:27.229090 | orchestrator | Saturday 10 January 2026 14:30:27 +0000 (0:00:01.159) 0:00:47.738 ******
2026-01-10 14:41:27.229094 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.229099 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.229103 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.229109 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:41:27.229114 | orchestrator |
2026-01-10 14:41:27.229118 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-10 14:41:27.229124 | orchestrator | Saturday 10 January 2026 14:30:30 +0000 (0:00:02.834) 0:00:50.572 ******
2026-01-10 14:41:27.229129 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.229133 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.229138 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.229142 | orchestrator |
2026-01-10 14:41:27.229147 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-10 14:41:27.229151 | orchestrator | Saturday 10 January 2026 14:30:31 +0000 (0:00:00.678) 0:00:51.251 ******
2026-01-10 14:41:27.229156 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.229160 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.229165 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.229169 | orchestrator |
2026-01-10 14:41:27.229174 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-10 14:41:27.229178 | orchestrator | Saturday 10 January 2026 14:30:31 +0000 (0:00:00.534) 0:00:51.785 ******
2026-01-10 14:41:27.229183 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.229187 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.229192 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.229196 | orchestrator |
2026-01-10 14:41:27.229201 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-10 14:41:27.229205 | orchestrator | Saturday 10 January 2026 14:30:32 +0000 (0:00:00.599) 0:00:52.385 ******
2026-01-10 14:41:27.229214 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.229218 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.229223 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.229228 | orchestrator |
2026-01-10 14:41:27.229232 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-10 14:41:27.229237 | orchestrator | Saturday 10 January 2026 14:30:33 +0000 (0:00:01.008) 0:00:53.393 ******
2026-01-10 14:41:27.229241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:41:27.229246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-10 14:41:27.229250 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-10 14:41:27.229255 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.229259 | orchestrator |
2026-01-10 14:41:27.229263 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-10 14:41:27.229268 | orchestrator | Saturday 10 January 2026 14:30:34 +0000 (0:00:00.518) 0:00:53.911 ******
2026-01-10 14:41:27.229273 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:41:27.229277 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-10 14:41:27.229282 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-10 14:41:27.229286 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.229291 | orchestrator |
2026-01-10 14:41:27.229295 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-10 14:41:27.229300 | orchestrator | Saturday 10 January 2026 14:30:34 +0000 (0:00:00.572) 0:00:54.484 ******
2026-01-10 14:41:27.229304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:41:27.229308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-10 14:41:27.229313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-10 14:41:27.229317 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.229322 | orchestrator |
2026-01-10 14:41:27.229326 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-10 14:41:27.229331 | orchestrator | Saturday 10 January 2026 14:30:35 +0000 (0:00:00.564) 0:00:55.049 ******
2026-01-10 14:41:27.229335 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.229340 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.229344 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.229349 | orchestrator |
2026-01-10 14:41:27.229353 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-10 14:41:27.229358 | orchestrator | Saturday 10 January 2026 14:30:35 +0000 (0:00:00.486) 0:00:55.535 ******
2026-01-10 14:41:27.229363 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-10 14:41:27.229367 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-10 14:41:27.229375 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-10 14:41:27.229379 | orchestrator |
2026-01-10 14:41:27.229384 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-10 14:41:27.229388 | orchestrator | Saturday 10 January 2026 14:30:37 +0000 (0:00:01.621) 0:00:57.157 ******
2026-01-10 14:41:27.229393 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-10 14:41:27.229398 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-10 14:41:27.229403 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-10 14:41:27.229411 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:41:27.229415 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-10 14:41:27.229420 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-10 14:41:27.229424 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-10 14:41:27.229429 | orchestrator |
2026-01-10 14:41:27.229433 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-10 14:41:27.229442 | orchestrator | Saturday 10 January 2026 14:30:38 +0000 (0:00:00.848) 0:00:58.005 ******
2026-01-10 14:41:27.229447 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-10 14:41:27.229451 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-10 14:41:27.229456 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-10 14:41:27.229460 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:41:27.229465 |
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-10 14:41:27.229469 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-10 14:41:27.229474 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-10 14:41:27.229478 | orchestrator | 2026-01-10 14:41:27.229482 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-10 14:41:27.229487 | orchestrator | Saturday 10 January 2026 14:30:40 +0000 (0:00:02.128) 0:01:00.134 ****** 2026-01-10 14:41:27.229492 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:41:27.229541 | orchestrator | 2026-01-10 14:41:27.229547 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-10 14:41:27.229551 | orchestrator | Saturday 10 January 2026 14:30:41 +0000 (0:00:01.170) 0:01:01.304 ****** 2026-01-10 14:41:27.229556 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:41:27.229561 | orchestrator | 2026-01-10 14:41:27.229567 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-10 14:41:27.229574 | orchestrator | Saturday 10 January 2026 14:30:42 +0000 (0:00:01.326) 0:01:02.630 ****** 2026-01-10 14:41:27.229581 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.229588 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.229596 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.229603 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.229610 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.229618 | 
orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.229625 | orchestrator | 2026-01-10 14:41:27.229633 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-10 14:41:27.229640 | orchestrator | Saturday 10 January 2026 14:30:44 +0000 (0:00:01.446) 0:01:04.077 ****** 2026-01-10 14:41:27.229648 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.229656 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.229663 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.229670 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.229679 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.229684 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.229688 | orchestrator | 2026-01-10 14:41:27.229693 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-10 14:41:27.229698 | orchestrator | Saturday 10 January 2026 14:30:45 +0000 (0:00:01.296) 0:01:05.373 ****** 2026-01-10 14:41:27.229702 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.229707 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.229711 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.229716 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.229720 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.229725 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.229729 | orchestrator | 2026-01-10 14:41:27.229734 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-10 14:41:27.229738 | orchestrator | Saturday 10 January 2026 14:30:46 +0000 (0:00:01.111) 0:01:06.484 ****** 2026-01-10 14:41:27.229743 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.229747 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.229757 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.229762 | orchestrator | ok: [testbed-node-4] 2026-01-10 
14:41:27.229766 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.229771 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.229775 | orchestrator | 2026-01-10 14:41:27.229780 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-10 14:41:27.229784 | orchestrator | Saturday 10 January 2026 14:30:47 +0000 (0:00:01.215) 0:01:07.700 ****** 2026-01-10 14:41:27.229789 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.229793 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.229800 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.229808 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.229815 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.229828 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.229835 | orchestrator | 2026-01-10 14:41:27.229842 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-10 14:41:27.229849 | orchestrator | Saturday 10 January 2026 14:30:49 +0000 (0:00:01.750) 0:01:09.450 ****** 2026-01-10 14:41:27.229853 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.229857 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.229861 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.229865 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.229870 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.229874 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.229878 | orchestrator | 2026-01-10 14:41:27.229886 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-10 14:41:27.229890 | orchestrator | Saturday 10 January 2026 14:30:51 +0000 (0:00:01.435) 0:01:10.886 ****** 2026-01-10 14:41:27.229895 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.229899 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.229903 | 
orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.229907 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.229911 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.229915 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.229919 | orchestrator | 2026-01-10 14:41:27.229923 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-10 14:41:27.229927 | orchestrator | Saturday 10 January 2026 14:30:52 +0000 (0:00:01.104) 0:01:11.991 ****** 2026-01-10 14:41:27.229931 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.229935 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.229940 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.229944 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.229948 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.229952 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.229956 | orchestrator | 2026-01-10 14:41:27.229960 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-10 14:41:27.229964 | orchestrator | Saturday 10 January 2026 14:30:53 +0000 (0:00:01.755) 0:01:13.747 ****** 2026-01-10 14:41:27.229968 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.229972 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.229976 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.229980 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.229984 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.229988 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.229992 | orchestrator | 2026-01-10 14:41:27.229997 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-10 14:41:27.230001 | orchestrator | Saturday 10 January 2026 14:30:55 +0000 (0:00:02.073) 0:01:15.820 ****** 2026-01-10 14:41:27.230005 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.230009 | 
orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.230051 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.230057 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.230061 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.230066 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.230074 | orchestrator | 2026-01-10 14:41:27.230079 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-10 14:41:27.230083 | orchestrator | Saturday 10 January 2026 14:30:57 +0000 (0:00:01.076) 0:01:16.896 ****** 2026-01-10 14:41:27.230087 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.230091 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.230095 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.230099 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.230103 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.230107 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.230111 | orchestrator | 2026-01-10 14:41:27.230116 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-10 14:41:27.230120 | orchestrator | Saturday 10 January 2026 14:30:58 +0000 (0:00:01.128) 0:01:18.025 ****** 2026-01-10 14:41:27.230124 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.230128 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.230132 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.230136 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.230140 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.230144 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.230148 | orchestrator | 2026-01-10 14:41:27.230152 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-10 14:41:27.230156 | orchestrator | Saturday 10 January 2026 14:30:58 +0000 (0:00:00.662) 0:01:18.687 ****** 
2026-01-10 14:41:27.230160 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.230164 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.230168 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.230172 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.230177 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.230181 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.230185 | orchestrator | 2026-01-10 14:41:27.230189 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-10 14:41:27.230193 | orchestrator | Saturday 10 January 2026 14:30:59 +0000 (0:00:00.777) 0:01:19.465 ****** 2026-01-10 14:41:27.230197 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.230201 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.230205 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.230209 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.230213 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.230217 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.230221 | orchestrator | 2026-01-10 14:41:27.230225 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-10 14:41:27.230229 | orchestrator | Saturday 10 January 2026 14:31:00 +0000 (0:00:00.742) 0:01:20.207 ****** 2026-01-10 14:41:27.230233 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.230237 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.230241 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.230245 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.230249 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.230253 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.230257 | orchestrator | 2026-01-10 14:41:27.230261 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-10 
14:41:27.230265 | orchestrator | Saturday 10 January 2026 14:31:01 +0000 (0:00:00.832) 0:01:21.039 ****** 2026-01-10 14:41:27.230269 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.230274 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.230277 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.230282 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.230296 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.230300 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.230304 | orchestrator | 2026-01-10 14:41:27.230308 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-10 14:41:27.230312 | orchestrator | Saturday 10 January 2026 14:31:02 +0000 (0:00:00.820) 0:01:21.860 ****** 2026-01-10 14:41:27.230320 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.230324 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.230328 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.230332 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.230336 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.230340 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.230344 | orchestrator | 2026-01-10 14:41:27.230350 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-10 14:41:27.230355 | orchestrator | Saturday 10 January 2026 14:31:03 +0000 (0:00:01.027) 0:01:22.888 ****** 2026-01-10 14:41:27.230359 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.230363 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.230367 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.230371 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.230375 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.230378 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.230382 | orchestrator | 2026-01-10 14:41:27.230386 | orchestrator | TASK 
[ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-10 14:41:27.230391 | orchestrator | Saturday 10 January 2026 14:31:04 +0000 (0:00:01.480) 0:01:24.369 ****** 2026-01-10 14:41:27.230394 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.230399 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.230402 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.230406 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.230410 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.230414 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.230418 | orchestrator | 2026-01-10 14:41:27.230423 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-10 14:41:27.230427 | orchestrator | Saturday 10 January 2026 14:31:06 +0000 (0:00:02.125) 0:01:26.495 ****** 2026-01-10 14:41:27.230431 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:41:27.230435 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:41:27.230439 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:41:27.230443 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:41:27.230447 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:41:27.230451 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:41:27.230455 | orchestrator | 2026-01-10 14:41:27.230459 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-10 14:41:27.230463 | orchestrator | Saturday 10 January 2026 14:31:08 +0000 (0:00:01.842) 0:01:28.338 ****** 2026-01-10 14:41:27.230467 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:41:27.230471 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:41:27.230475 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:41:27.230479 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:41:27.230483 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:41:27.230487 | orchestrator | changed: [testbed-node-2] 
2026-01-10 14:41:27.230491 | orchestrator | 2026-01-10 14:41:27.230495 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-10 14:41:27.230517 | orchestrator | Saturday 10 January 2026 14:31:10 +0000 (0:00:02.478) 0:01:30.816 ****** 2026-01-10 14:41:27.230524 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:41:27.230529 | orchestrator | 2026-01-10 14:41:27.230533 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-10 14:41:27.230538 | orchestrator | Saturday 10 January 2026 14:31:12 +0000 (0:00:01.254) 0:01:32.071 ****** 2026-01-10 14:41:27.230542 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.230546 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.230550 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.230554 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.230558 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.230562 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.230566 | orchestrator | 2026-01-10 14:41:27.230570 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-10 14:41:27.230580 | orchestrator | Saturday 10 January 2026 14:31:12 +0000 (0:00:00.602) 0:01:32.674 ****** 2026-01-10 14:41:27.230603 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.230607 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.230611 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.230615 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.230619 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.230623 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.230628 | orchestrator | 2026-01-10 14:41:27.230632 | 
orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-10 14:41:27.230637 | orchestrator | Saturday 10 January 2026 14:31:13 +0000 (0:00:00.976) 0:01:33.650 ****** 2026-01-10 14:41:27.230644 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-10 14:41:27.230651 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-10 14:41:27.230657 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-10 14:41:27.230664 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-10 14:41:27.230671 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-10 14:41:27.230677 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-10 14:41:27.230683 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-10 14:41:27.230690 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-10 14:41:27.230696 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-10 14:41:27.230701 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-10 14:41:27.230711 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-10 14:41:27.230717 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-10 14:41:27.230723 | orchestrator | 2026-01-10 14:41:27.230729 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-10 14:41:27.230734 | orchestrator | Saturday 10 January 2026 14:31:15 +0000 (0:00:01.550) 0:01:35.200 ****** 2026-01-10 14:41:27.230741 | orchestrator | 
changed: [testbed-node-3] 2026-01-10 14:41:27.230747 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:41:27.230752 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:41:27.230765 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:41:27.230771 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:41:27.230777 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:41:27.230783 | orchestrator | 2026-01-10 14:41:27.230790 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-01-10 14:41:27.230796 | orchestrator | Saturday 10 January 2026 14:31:16 +0000 (0:00:01.464) 0:01:36.665 ****** 2026-01-10 14:41:27.230802 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.230808 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.230814 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.230821 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.230827 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.230834 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.230840 | orchestrator | 2026-01-10 14:41:27.230846 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-10 14:41:27.230852 | orchestrator | Saturday 10 January 2026 14:31:17 +0000 (0:00:00.612) 0:01:37.277 ****** 2026-01-10 14:41:27.230858 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.230865 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.230871 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.230878 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.230884 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.230897 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.230904 | orchestrator | 2026-01-10 14:41:27.230911 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-10 14:41:27.230917 | 
orchestrator | Saturday 10 January 2026 14:31:18 +0000 (0:00:00.689) 0:01:37.967 ****** 2026-01-10 14:41:27.230923 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.230929 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.230935 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.230941 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.230948 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.230953 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.230959 | orchestrator | 2026-01-10 14:41:27.230966 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-10 14:41:27.230972 | orchestrator | Saturday 10 January 2026 14:31:18 +0000 (0:00:00.526) 0:01:38.493 ****** 2026-01-10 14:41:27.230979 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:41:27.230985 | orchestrator | 2026-01-10 14:41:27.230992 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-10 14:41:27.230999 | orchestrator | Saturday 10 January 2026 14:31:19 +0000 (0:00:01.069) 0:01:39.562 ****** 2026-01-10 14:41:27.231005 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.231012 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.231018 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.231026 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.231031 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.231038 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.231045 | orchestrator | 2026-01-10 14:41:27.231052 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-10 14:41:27.231059 | orchestrator | Saturday 10 January 2026 14:32:11 +0000 (0:00:51.354) 0:02:30.917 ****** 2026-01-10 
14:41:27.231065 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-10 14:41:27.231071 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-10 14:41:27.231080 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-10 14:41:27.231085 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.231089 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-10 14:41:27.231093 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-10 14:41:27.231097 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-10 14:41:27.231101 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.231105 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-10 14:41:27.231109 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-10 14:41:27.231114 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-10 14:41:27.231118 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.231122 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-10 14:41:27.231126 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-10 14:41:27.231130 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-10 14:41:27.231134 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.231138 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-10 14:41:27.231142 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-10 14:41:27.231146 | orchestrator | skipping: [testbed-node-1] => 
(item=docker.io/grafana/grafana:6.7.4)
2026-01-10 14:41:27.231150 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.231166 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-10 14:41:27.231171 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-10 14:41:27.231175 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-10 14:41:27.231179 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.231183 | orchestrator |
2026-01-10 14:41:27.231187 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-01-10 14:41:27.231191 | orchestrator | Saturday 10 January 2026 14:32:11 +0000 (0:00:00.655) 0:02:31.572 ******
2026-01-10 14:41:27.231195 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.231200 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.231207 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.231211 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.231216 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.231220 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.231224 | orchestrator |
2026-01-10 14:41:27.231228 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-01-10 14:41:27.231232 | orchestrator | Saturday 10 January 2026 14:32:12 +0000 (0:00:00.885) 0:02:32.458 ******
2026-01-10 14:41:27.231236 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.231240 | orchestrator |
2026-01-10 14:41:27.231244 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-01-10 14:41:27.231248 | orchestrator | Saturday 10 January 2026 14:32:12 +0000 (0:00:00.149) 0:02:32.608 ******
2026-01-10 14:41:27.231252 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.231256 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.231260 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.231264 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.231268 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.231272 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.231276 | orchestrator |
2026-01-10 14:41:27.231280 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-01-10 14:41:27.231284 | orchestrator | Saturday 10 January 2026 14:32:13 +0000 (0:00:00.518) 0:02:33.127 ******
2026-01-10 14:41:27.231288 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.231292 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.231296 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.231300 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.231304 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.231308 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.231312 | orchestrator |
2026-01-10 14:41:27.231317 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-01-10 14:41:27.231321 | orchestrator | Saturday 10 January 2026 14:32:14 +0000 (0:00:00.727) 0:02:33.855 ******
2026-01-10 14:41:27.231325 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.231329 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.231333 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.231337 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.231341 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.231345 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.231349 | orchestrator |
2026-01-10 14:41:27.231353 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-01-10 14:41:27.231357 | orchestrator | Saturday 10 January 2026 14:32:14 +0000 (0:00:00.576) 0:02:34.431 ******
2026-01-10 14:41:27.231361 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.231365 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.231369 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.231373 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.231377 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.231381 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.231385 | orchestrator |
2026-01-10 14:41:27.231389 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-01-10 14:41:27.231397 | orchestrator | Saturday 10 January 2026 14:32:16 +0000 (0:00:02.111) 0:02:36.543 ******
2026-01-10 14:41:27.231401 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.231405 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.231409 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.231413 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.231417 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.231421 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.231425 | orchestrator |
2026-01-10 14:41:27.231429 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-01-10 14:41:27.231434 | orchestrator | Saturday 10 January 2026 14:32:17 +0000 (0:00:00.767) 0:02:37.311 ******
2026-01-10 14:41:27.231438 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:41:27.231444 | orchestrator |
2026-01-10 14:41:27.231448 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-01-10 14:41:27.231452 | orchestrator | Saturday 10 January 2026 14:32:18 +0000 (0:00:01.334) 0:02:38.645 ******
2026-01-10 14:41:27.231456 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.231460 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.231464 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.231468 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.231472 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.231476 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.231480 | orchestrator |
2026-01-10 14:41:27.231484 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-01-10 14:41:27.231488 | orchestrator | Saturday 10 January 2026 14:32:19 +0000 (0:00:00.985) 0:02:39.630 ******
2026-01-10 14:41:27.231492 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.231496 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.231546 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.231550 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.231555 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.231559 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.231563 | orchestrator |
2026-01-10 14:41:27.231567 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-01-10 14:41:27.231571 | orchestrator | Saturday 10 January 2026 14:32:20 +0000 (0:00:00.773) 0:02:40.404 ******
2026-01-10 14:41:27.231575 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.231579 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.231588 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.231592 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.231596 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.231600 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.231604 | orchestrator |
2026-01-10 14:41:27.231608 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-01-10 14:41:27.231612 | orchestrator | Saturday 10 January 2026 14:32:21 +0000 (0:00:01.037) 0:02:41.442 ******
2026-01-10 14:41:27.231616 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.231620 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.231624 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.231629 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.231633 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.231640 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.231644 | orchestrator |
2026-01-10 14:41:27.231648 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-01-10 14:41:27.231652 | orchestrator | Saturday 10 January 2026 14:32:22 +0000 (0:00:00.957) 0:02:42.400 ******
2026-01-10 14:41:27.231656 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.231661 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.231665 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.231669 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.231673 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.231681 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.231685 | orchestrator |
2026-01-10 14:41:27.231690 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-01-10 14:41:27.231694 | orchestrator | Saturday 10 January 2026 14:32:23 +0000 (0:00:00.965) 0:02:43.365 ******
2026-01-10 14:41:27.231698 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.231702 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.231706 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.231710 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.231714 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.231718 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.231722 | orchestrator |
2026-01-10 14:41:27.231726 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-01-10 14:41:27.231730 | orchestrator | Saturday 10 January 2026 14:32:24 +0000 (0:00:00.767) 0:02:44.133 ******
2026-01-10 14:41:27.231734 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.231738 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.231742 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.231746 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.231751 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.231754 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.231759 | orchestrator |
2026-01-10 14:41:27.231763 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-01-10 14:41:27.231767 | orchestrator | Saturday 10 January 2026 14:32:25 +0000 (0:00:00.913) 0:02:45.047 ******
2026-01-10 14:41:27.231771 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.231775 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.231779 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.231783 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.231787 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.231791 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.231795 | orchestrator |
2026-01-10 14:41:27.231799 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-01-10 14:41:27.231803 | orchestrator | Saturday 10 January 2026 14:32:26 +0000 (0:00:00.973) 0:02:46.021 ******
2026-01-10 14:41:27.231807 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.231812 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.231816 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.231820 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.231824 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.231828 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.231832 | orchestrator |
2026-01-10 14:41:27.231836 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-01-10 14:41:27.231840 | orchestrator | Saturday 10 January 2026 14:32:27 +0000 (0:00:01.567) 0:02:47.588 ******
2026-01-10 14:41:27.231845 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:41:27.231849 | orchestrator |
2026-01-10 14:41:27.231853 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-01-10 14:41:27.231857 | orchestrator | Saturday 10 January 2026 14:32:29 +0000 (0:00:01.535) 0:02:49.124 ******
2026-01-10 14:41:27.231861 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-01-10 14:41:27.231866 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-01-10 14:41:27.231870 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-01-10 14:41:27.231874 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-01-10 14:41:27.231878 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-01-10 14:41:27.231882 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-01-10 14:41:27.231886 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-01-10 14:41:27.231890 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-01-10 14:41:27.231894 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-01-10 14:41:27.231902 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-01-10 14:41:27.231906 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-01-10 14:41:27.231910 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-01-10 14:41:27.231914 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-01-10 14:41:27.231918 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-01-10 14:41:27.231922 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-01-10 14:41:27.231926 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-01-10 14:41:27.231931 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-01-10 14:41:27.231935 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-01-10 14:41:27.231941 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-01-10 14:41:27.231945 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-01-10 14:41:27.231949 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-01-10 14:41:27.231953 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-01-10 14:41:27.231957 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-01-10 14:41:27.231960 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-01-10 14:41:27.231964 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-01-10 14:41:27.231968 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-01-10 14:41:27.231975 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-01-10 14:41:27.231978 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-01-10 14:41:27.231982 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-01-10 14:41:27.231986 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-01-10 14:41:27.231989 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-01-10 14:41:27.231993 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-01-10 14:41:27.231997 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-01-10 14:41:27.232001 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-01-10 14:41:27.232004 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-01-10 14:41:27.232008 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-01-10 14:41:27.232012 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-01-10 14:41:27.232016 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-01-10 14:41:27.232020 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-01-10 14:41:27.232023 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-01-10 14:41:27.232027 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-01-10 14:41:27.232031 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-01-10 14:41:27.232034 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-01-10 14:41:27.232038 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-01-10 14:41:27.232042 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-10 14:41:27.232046 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-01-10 14:41:27.232049 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-01-10 14:41:27.232053 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-01-10 14:41:27.232057 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-01-10 14:41:27.232060 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-10 14:41:27.232064 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-10 14:41:27.232068 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-10 14:41:27.232074 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-10 14:41:27.232078 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-10 14:41:27.232082 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-10 14:41:27.232086 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-10 14:41:27.232089 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-10 14:41:27.232093 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-10 14:41:27.232097 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-10 14:41:27.232100 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-10 14:41:27.232104 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-10 14:41:27.232108 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-10 14:41:27.232111 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-10 14:41:27.232115 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-10 14:41:27.232119 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-10 14:41:27.232122 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-10 14:41:27.232126 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-10 14:41:27.232130 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-10 14:41:27.232133 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-10 14:41:27.232137 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-10 14:41:27.232141 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-10 14:41:27.232145 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-10 14:41:27.232148 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-10 14:41:27.232152 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-10 14:41:27.232156 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-10 14:41:27.232159 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-10 14:41:27.232165 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-10 14:41:27.232169 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-01-10 14:41:27.232173 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-10 14:41:27.232177 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-10 14:41:27.232180 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-10 14:41:27.232184 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-10 14:41:27.232188 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-10 14:41:27.232200 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-01-10 14:41:27.232204 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-10 14:41:27.232208 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-10 14:41:27.232212 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-01-10 14:41:27.232216 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-01-10 14:41:27.232219 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-01-10 14:41:27.232223 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-01-10 14:41:27.232227 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-01-10 14:41:27.232230 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-01-10 14:41:27.232236 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-01-10 14:41:27.232240 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-01-10 14:41:27.232244 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-01-10 14:41:27.232248 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-01-10 14:41:27.232251 | orchestrator |
2026-01-10 14:41:27.232255 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-01-10 14:41:27.232259 | orchestrator | Saturday 10 January 2026 14:32:36 +0000 (0:00:07.482) 0:02:56.607 ******
2026-01-10 14:41:27.232262 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.232266 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.232270 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.232274 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:41:27.232278 | orchestrator |
2026-01-10 14:41:27.232282 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-01-10 14:41:27.232286 | orchestrator | Saturday 10 January 2026 14:32:37 +0000 (0:00:01.025) 0:02:57.632 ******
2026-01-10 14:41:27.232290 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-10 14:41:27.232294 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-10 14:41:27.232297 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-10 14:41:27.232301 | orchestrator |
2026-01-10 14:41:27.232305 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-01-10 14:41:27.232309 | orchestrator | Saturday 10 January 2026 14:32:39 +0000 (0:00:01.202) 0:02:58.834 ******
2026-01-10 14:41:27.232312 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-10 14:41:27.232316 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-10 14:41:27.232320 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-10 14:41:27.232324 | orchestrator |
2026-01-10 14:41:27.232327 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-01-10 14:41:27.232331 | orchestrator | Saturday 10 January 2026 14:32:40 +0000 (0:00:01.536) 0:03:00.371 ******
2026-01-10 14:41:27.232335 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.232339 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.232342 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.232346 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.232350 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.232353 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.232357 | orchestrator |
2026-01-10 14:41:27.232361 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-01-10 14:41:27.232364 | orchestrator | Saturday 10 January 2026 14:32:41 +0000 (0:00:00.954) 0:03:01.325 ******
2026-01-10 14:41:27.232368 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.232372 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.232375 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.232379 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.232383 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.232387 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.232390 | orchestrator |
2026-01-10 14:41:27.232394 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-01-10 14:41:27.232398 | orchestrator | Saturday 10 January 2026 14:32:42 +0000 (0:00:01.161) 0:03:02.487 ******
2026-01-10 14:41:27.232402 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.232408 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.232411 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.232415 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.232419 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.232423 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.232426 | orchestrator |
2026-01-10 14:41:27.232433 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-01-10 14:41:27.232437 | orchestrator | Saturday 10 January 2026 14:32:43 +0000 (0:00:01.097) 0:03:03.584 ******
2026-01-10 14:41:27.232441 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.232444 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.232448 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.232452 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.232455 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.232459 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.232463 | orchestrator |
2026-01-10 14:41:27.232467 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-01-10 14:41:27.232473 | orchestrator | Saturday 10 January 2026 14:32:44 +0000 (0:00:00.733) 0:03:04.778 ******
2026-01-10 14:41:27.232477 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.232480 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.232484 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.232488 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.232492 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.232496 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.232512 | orchestrator |
2026-01-10 14:41:27.232516 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-01-10 14:41:27.232520 | orchestrator | Saturday 10 January 2026 14:32:45 +0000 (0:00:00.733) 0:03:05.511 ******
2026-01-10 14:41:27.232524 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.232528 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.232531 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.232535 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.232539 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.232543 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.232546 | orchestrator |
2026-01-10 14:41:27.232550 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-10 14:41:27.232554 | orchestrator | Saturday 10 January 2026 14:32:46 +0000 (0:00:01.217) 0:03:06.729 ******
2026-01-10 14:41:27.232558 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.232562 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.232565 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.232569 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.232573 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.232576 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.232580 | orchestrator |
2026-01-10 14:41:27.232584 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-01-10 14:41:27.232588 | orchestrator | Saturday 10 January 2026 14:32:47 +0000 (0:00:00.870) 0:03:07.600 ******
2026-01-10 14:41:27.232591 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.232595 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.232599 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.232603 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.232606 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.232610 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.232614 | orchestrator |
2026-01-10 14:41:27.232617 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-01-10 14:41:27.232621 | orchestrator | Saturday 10 January 2026 14:32:48 +0000 (0:00:01.104) 0:03:08.705 ******
2026-01-10 14:41:27.232625 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.232629 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.232636 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.232640 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.232644 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.232647 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.232651 | orchestrator |
2026-01-10 14:41:27.232655 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-01-10 14:41:27.232659 | orchestrator | Saturday 10 January 2026 14:32:51 +0000 (0:00:02.995) 0:03:11.700 ******
2026-01-10 14:41:27.232662 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.232666 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.232670 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.232674 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.232677 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.232681 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.232685 | orchestrator |
2026-01-10 14:41:27.232689 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-01-10 14:41:27.232692 | orchestrator | Saturday 10 January 2026 14:32:53 +0000 (0:00:01.345) 0:03:13.045 ******
2026-01-10 14:41:27.232696 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.232700 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.232704 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.232707 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.232711 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.232715 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.232719 | orchestrator |
2026-01-10 14:41:27.232723 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-01-10 14:41:27.232726 | orchestrator | Saturday 10 January 2026 14:32:54 +0000 (0:00:01.080) 0:03:14.125 ******
2026-01-10 14:41:27.232730 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.232734 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.232737 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.232741 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.232745 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.232749 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.232753 | orchestrator |
2026-01-10 14:41:27.232756 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-01-10 14:41:27.232760 | orchestrator | Saturday 10 January 2026 14:32:55 +0000 (0:00:01.648) 0:03:15.773 ******
2026-01-10 14:41:27.232764 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-10 14:41:27.232768 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-10 14:41:27.232772 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-10 14:41:27.232775 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.232782 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.232786 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.232790 | orchestrator |
2026-01-10 14:41:27.232794 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-01-10 14:41:27.232797 | orchestrator | Saturday 10 January 2026 14:32:56 +0000 (0:00:00.727) 0:03:16.501 ******
2026-01-10 14:41:27.232806 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-01-10 14:41:27.232813 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-01-10 14:41:27.232818 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.232825 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-01-10 14:41:27.232829 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-01-10 14:41:27.232833 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.232836 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-01-10 14:41:27.232840 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-01-10 14:41:27.232844 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.232848 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.232852 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.232855 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.232859 | orchestrator |
2026-01-10 14:41:27.232863 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-01-10 14:41:27.232866 | orchestrator | Saturday 10 January 2026 14:32:57 +0000 (0:00:00.981) 0:03:17.483 ******
2026-01-10 14:41:27.232870 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.232874 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.232878 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.232881 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.232885 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.232889 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.232893 | orchestrator |
2026-01-10 14:41:27.232897 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-01-10 14:41:27.232900 | orchestrator | Saturday 10 January 2026 14:32:58 +0000 (0:00:00.653) 0:03:18.136 ******
2026-01-10 14:41:27.232904 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.232908 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.232912 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.232915 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.232919 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.232923 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.232926 | orchestrator |
2026-01-10 14:41:27.232930 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-10 14:41:27.232934 | orchestrator | Saturday 10 January 2026 14:32:59 +0000 (0:00:00.996) 0:03:19.132 ******
2026-01-10 14:41:27.232938 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.232942 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.232945 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.232949 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.232953 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.232956 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.232960 | orchestrator |
2026-01-10 14:41:27.232964 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-10 14:41:27.232968 | orchestrator | Saturday 10 January 2026 14:33:00 +0000 (0:00:00.850) 0:03:19.983 ******
2026-01-10 14:41:27.232972 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.232979 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.232982 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.232986 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.232990 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.232994 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.232997 | orchestrator |
2026-01-10 14:41:27.233001 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-10 14:41:27.233007 | orchestrator | Saturday 10 January 2026 14:33:01 +0000 (0:00:01.251) 0:03:21.234 ******
2026-01-10 14:41:27.233011 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.233015 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.233019 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.233023 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.233026 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.233030 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.233034 | orchestrator |
2026-01-10 14:41:27.233037 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-10 14:41:27.233041 | orchestrator | Saturday 10 January 2026 14:33:02 +0000 (0:00:00.781) 0:03:22.016 ******
2026-01-10 14:41:27.233045 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.233051 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.233055 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.233059 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.233062 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.233066 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.233070 | orchestrator |
2026-01-10 14:41:27.233074 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-10 14:41:27.233077 | orchestrator | Saturday 10 January 2026 14:33:02 +0000 (0:00:00.782) 0:03:22.798 ******
2026-01-10 14:41:27.233081 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:41:27.233085
| orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:41:27.233088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:41:27.233092 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233096 | orchestrator | 2026-01-10 14:41:27.233100 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-10 14:41:27.233103 | orchestrator | Saturday 10 January 2026 14:33:03 +0000 (0:00:00.475) 0:03:23.274 ****** 2026-01-10 14:41:27.233107 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:41:27.233111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:41:27.233115 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:41:27.233118 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233122 | orchestrator | 2026-01-10 14:41:27.233126 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-10 14:41:27.233129 | orchestrator | Saturday 10 January 2026 14:33:03 +0000 (0:00:00.382) 0:03:23.656 ****** 2026-01-10 14:41:27.233133 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:41:27.233137 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:41:27.233140 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:41:27.233144 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233148 | orchestrator | 2026-01-10 14:41:27.233152 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-10 14:41:27.233155 | orchestrator | Saturday 10 January 2026 14:33:04 +0000 (0:00:00.377) 0:03:24.034 ****** 2026-01-10 14:41:27.233159 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.233163 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.233166 | orchestrator | ok: 
[testbed-node-5] 2026-01-10 14:41:27.233170 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.233174 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.233178 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.233181 | orchestrator | 2026-01-10 14:41:27.233185 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-10 14:41:27.233192 | orchestrator | Saturday 10 January 2026 14:33:04 +0000 (0:00:00.543) 0:03:24.577 ****** 2026-01-10 14:41:27.233195 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-10 14:41:27.233199 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-10 14:41:27.233203 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-10 14:41:27.233206 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-01-10 14:41:27.233210 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.233214 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-01-10 14:41:27.233218 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.233221 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-01-10 14:41:27.233225 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.233229 | orchestrator | 2026-01-10 14:41:27.233232 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-10 14:41:27.233236 | orchestrator | Saturday 10 January 2026 14:33:06 +0000 (0:00:02.172) 0:03:26.750 ****** 2026-01-10 14:41:27.233240 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:41:27.233243 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:41:27.233247 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:41:27.233251 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:41:27.233255 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:41:27.233258 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:41:27.233262 | orchestrator | 2026-01-10 14:41:27.233266 | 
orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-10 14:41:27.233270 | orchestrator | Saturday 10 January 2026 14:33:10 +0000 (0:00:03.168) 0:03:29.919 ****** 2026-01-10 14:41:27.233273 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:41:27.233277 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:41:27.233281 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:41:27.233285 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:41:27.233288 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:41:27.233292 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:41:27.233296 | orchestrator | 2026-01-10 14:41:27.233299 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-10 14:41:27.233303 | orchestrator | Saturday 10 January 2026 14:33:11 +0000 (0:00:01.458) 0:03:31.377 ****** 2026-01-10 14:41:27.233307 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233310 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.233314 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.233318 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:41:27.233322 | orchestrator | 2026-01-10 14:41:27.233325 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-10 14:41:27.233332 | orchestrator | Saturday 10 January 2026 14:33:12 +0000 (0:00:01.135) 0:03:32.513 ****** 2026-01-10 14:41:27.233336 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.233340 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.233344 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.233347 | orchestrator | 2026-01-10 14:41:27.233351 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-10 14:41:27.233355 | orchestrator | Saturday 10 
January 2026 14:33:13 +0000 (0:00:00.391) 0:03:32.905 ****** 2026-01-10 14:41:27.233358 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:41:27.233362 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:41:27.233366 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:41:27.233370 | orchestrator | 2026-01-10 14:41:27.233373 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-10 14:41:27.233379 | orchestrator | Saturday 10 January 2026 14:33:14 +0000 (0:00:01.362) 0:03:34.267 ****** 2026-01-10 14:41:27.233383 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-10 14:41:27.233387 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-10 14:41:27.233391 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-10 14:41:27.233397 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.233401 | orchestrator | 2026-01-10 14:41:27.233405 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-10 14:41:27.233409 | orchestrator | Saturday 10 January 2026 14:33:15 +0000 (0:00:00.827) 0:03:35.095 ****** 2026-01-10 14:41:27.233413 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.233416 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.233420 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.233424 | orchestrator | 2026-01-10 14:41:27.233427 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-10 14:41:27.233431 | orchestrator | Saturday 10 January 2026 14:33:15 +0000 (0:00:00.305) 0:03:35.400 ****** 2026-01-10 14:41:27.233435 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.233439 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.233442 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.233446 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:41:27.233450 | orchestrator | 2026-01-10 14:41:27.233454 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-10 14:41:27.233457 | orchestrator | Saturday 10 January 2026 14:33:16 +0000 (0:00:00.898) 0:03:36.299 ****** 2026-01-10 14:41:27.233461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:41:27.233465 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:41:27.233468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:41:27.233472 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233476 | orchestrator | 2026-01-10 14:41:27.233479 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-10 14:41:27.233483 | orchestrator | Saturday 10 January 2026 14:33:16 +0000 (0:00:00.367) 0:03:36.666 ****** 2026-01-10 14:41:27.233487 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233490 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.233494 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.233511 | orchestrator | 2026-01-10 14:41:27.233515 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-10 14:41:27.233519 | orchestrator | Saturday 10 January 2026 14:33:17 +0000 (0:00:00.296) 0:03:36.962 ****** 2026-01-10 14:41:27.233522 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233526 | orchestrator | 2026-01-10 14:41:27.233530 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-10 14:41:27.233534 | orchestrator | Saturday 10 January 2026 14:33:17 +0000 (0:00:00.191) 0:03:37.154 ****** 2026-01-10 14:41:27.233537 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233541 | 
orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.233545 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.233548 | orchestrator | 2026-01-10 14:41:27.233552 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-10 14:41:27.233556 | orchestrator | Saturday 10 January 2026 14:33:17 +0000 (0:00:00.280) 0:03:37.434 ****** 2026-01-10 14:41:27.233559 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233563 | orchestrator | 2026-01-10 14:41:27.233567 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-10 14:41:27.233570 | orchestrator | Saturday 10 January 2026 14:33:17 +0000 (0:00:00.171) 0:03:37.605 ****** 2026-01-10 14:41:27.233574 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233578 | orchestrator | 2026-01-10 14:41:27.233581 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-10 14:41:27.233585 | orchestrator | Saturday 10 January 2026 14:33:17 +0000 (0:00:00.203) 0:03:37.809 ****** 2026-01-10 14:41:27.233589 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233592 | orchestrator | 2026-01-10 14:41:27.233596 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-10 14:41:27.233600 | orchestrator | Saturday 10 January 2026 14:33:18 +0000 (0:00:00.113) 0:03:37.922 ****** 2026-01-10 14:41:27.233606 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233610 | orchestrator | 2026-01-10 14:41:27.233614 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-10 14:41:27.233617 | orchestrator | Saturday 10 January 2026 14:33:18 +0000 (0:00:00.231) 0:03:38.153 ****** 2026-01-10 14:41:27.233621 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233625 | orchestrator | 2026-01-10 14:41:27.233629 | orchestrator | RUNNING 
HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-10 14:41:27.233632 | orchestrator | Saturday 10 January 2026 14:33:18 +0000 (0:00:00.558) 0:03:38.712 ****** 2026-01-10 14:41:27.233636 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:41:27.233640 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:41:27.233643 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:41:27.233647 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233651 | orchestrator | 2026-01-10 14:41:27.233654 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-10 14:41:27.233661 | orchestrator | Saturday 10 January 2026 14:33:19 +0000 (0:00:00.402) 0:03:39.115 ****** 2026-01-10 14:41:27.233665 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233668 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.233672 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.233676 | orchestrator | 2026-01-10 14:41:27.233679 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-10 14:41:27.233683 | orchestrator | Saturday 10 January 2026 14:33:19 +0000 (0:00:00.360) 0:03:39.475 ****** 2026-01-10 14:41:27.233687 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233690 | orchestrator | 2026-01-10 14:41:27.233694 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-10 14:41:27.233700 | orchestrator | Saturday 10 January 2026 14:33:19 +0000 (0:00:00.244) 0:03:39.720 ****** 2026-01-10 14:41:27.233704 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233708 | orchestrator | 2026-01-10 14:41:27.233711 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-10 14:41:27.233715 | orchestrator | Saturday 10 January 
2026 14:33:20 +0000 (0:00:00.318) 0:03:40.039 ****** 2026-01-10 14:41:27.233719 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.233722 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.233726 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.233730 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:41:27.233733 | orchestrator | 2026-01-10 14:41:27.233737 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-01-10 14:41:27.233741 | orchestrator | Saturday 10 January 2026 14:33:21 +0000 (0:00:01.218) 0:03:41.257 ****** 2026-01-10 14:41:27.233745 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.233748 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.233752 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.233756 | orchestrator | 2026-01-10 14:41:27.233759 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-10 14:41:27.233763 | orchestrator | Saturday 10 January 2026 14:33:21 +0000 (0:00:00.365) 0:03:41.622 ****** 2026-01-10 14:41:27.233767 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:41:27.233771 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:41:27.233774 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:41:27.233778 | orchestrator | 2026-01-10 14:41:27.233782 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-10 14:41:27.233785 | orchestrator | Saturday 10 January 2026 14:33:23 +0000 (0:00:01.300) 0:03:42.923 ****** 2026-01-10 14:41:27.233789 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:41:27.233793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:41:27.233797 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 
14:41:27.233803 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233807 | orchestrator | 2026-01-10 14:41:27.233810 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-10 14:41:27.233814 | orchestrator | Saturday 10 January 2026 14:33:23 +0000 (0:00:00.890) 0:03:43.814 ****** 2026-01-10 14:41:27.233818 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.233821 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.233825 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.233829 | orchestrator | 2026-01-10 14:41:27.233832 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-10 14:41:27.233836 | orchestrator | Saturday 10 January 2026 14:33:24 +0000 (0:00:00.620) 0:03:44.434 ****** 2026-01-10 14:41:27.233840 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.233843 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.233847 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.233851 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:41:27.233855 | orchestrator | 2026-01-10 14:41:27.233858 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-10 14:41:27.233862 | orchestrator | Saturday 10 January 2026 14:33:25 +0000 (0:00:01.100) 0:03:45.534 ****** 2026-01-10 14:41:27.233866 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.233870 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.233873 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.233877 | orchestrator | 2026-01-10 14:41:27.233881 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-10 14:41:27.233884 | orchestrator | Saturday 10 January 2026 14:33:26 +0000 (0:00:00.697) 0:03:46.232 ****** 2026-01-10 14:41:27.233888 | 
orchestrator | changed: [testbed-node-3] 2026-01-10 14:41:27.233892 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:41:27.233895 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:41:27.233899 | orchestrator | 2026-01-10 14:41:27.233903 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-10 14:41:27.233907 | orchestrator | Saturday 10 January 2026 14:33:27 +0000 (0:00:01.372) 0:03:47.605 ****** 2026-01-10 14:41:27.233910 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:41:27.233914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:41:27.233918 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:41:27.233921 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233925 | orchestrator | 2026-01-10 14:41:27.233929 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-10 14:41:27.233932 | orchestrator | Saturday 10 January 2026 14:33:28 +0000 (0:00:00.663) 0:03:48.268 ****** 2026-01-10 14:41:27.233936 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.233940 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.233944 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.233947 | orchestrator | 2026-01-10 14:41:27.233951 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-01-10 14:41:27.233955 | orchestrator | Saturday 10 January 2026 14:33:28 +0000 (0:00:00.341) 0:03:48.610 ****** 2026-01-10 14:41:27.233958 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233962 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.233966 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.233970 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.233974 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.233980 | orchestrator | 
skipping: [testbed-node-2] 2026-01-10 14:41:27.233984 | orchestrator | 2026-01-10 14:41:27.233988 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-10 14:41:27.233992 | orchestrator | Saturday 10 January 2026 14:33:29 +0000 (0:00:00.879) 0:03:49.490 ****** 2026-01-10 14:41:27.233995 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.233999 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.234006 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.234010 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-1, testbed-node-0, testbed-node-2 2026-01-10 14:41:27.234046 | orchestrator | 2026-01-10 14:41:27.234054 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-10 14:41:27.234058 | orchestrator | Saturday 10 January 2026 14:33:30 +0000 (0:00:00.851) 0:03:50.341 ****** 2026-01-10 14:41:27.234061 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.234065 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.234069 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.234073 | orchestrator | 2026-01-10 14:41:27.234077 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-10 14:41:27.234081 | orchestrator | Saturday 10 January 2026 14:33:31 +0000 (0:00:00.794) 0:03:51.135 ****** 2026-01-10 14:41:27.234084 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:41:27.234088 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:41:27.234092 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:41:27.234095 | orchestrator | 2026-01-10 14:41:27.234099 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-10 14:41:27.234103 | orchestrator | Saturday 10 January 2026 14:33:32 +0000 (0:00:01.477) 0:03:52.613 ****** 2026-01-10 14:41:27.234107 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-0)  2026-01-10 14:41:27.234111 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-10 14:41:27.234115 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-10 14:41:27.234118 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.234122 | orchestrator | 2026-01-10 14:41:27.234126 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-10 14:41:27.234129 | orchestrator | Saturday 10 January 2026 14:33:33 +0000 (0:00:00.628) 0:03:53.242 ****** 2026-01-10 14:41:27.234133 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.234137 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.234141 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.234145 | orchestrator | 2026-01-10 14:41:27.234149 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-01-10 14:41:27.234152 | orchestrator | 2026-01-10 14:41:27.234156 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-10 14:41:27.234160 | orchestrator | Saturday 10 January 2026 14:33:33 +0000 (0:00:00.539) 0:03:53.781 ****** 2026-01-10 14:41:27.234163 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:41:27.234167 | orchestrator | 2026-01-10 14:41:27.234171 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-10 14:41:27.234175 | orchestrator | Saturday 10 January 2026 14:33:34 +0000 (0:00:00.659) 0:03:54.441 ****** 2026-01-10 14:41:27.234178 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:41:27.234182 | orchestrator | 2026-01-10 14:41:27.234186 | orchestrator | TASK [ceph-handler : Check for a mon container] 
******************************** 2026-01-10 14:41:27.234190 | orchestrator | Saturday 10 January 2026 14:33:35 +0000 (0:00:00.499) 0:03:54.940 ****** 2026-01-10 14:41:27.234193 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.234197 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.234201 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.234204 | orchestrator | 2026-01-10 14:41:27.234208 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-10 14:41:27.234212 | orchestrator | Saturday 10 January 2026 14:33:36 +0000 (0:00:01.136) 0:03:56.077 ****** 2026-01-10 14:41:27.234216 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.234219 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.234223 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.234227 | orchestrator | 2026-01-10 14:41:27.234230 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-10 14:41:27.234238 | orchestrator | Saturday 10 January 2026 14:33:36 +0000 (0:00:00.373) 0:03:56.450 ****** 2026-01-10 14:41:27.234242 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.234245 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.234249 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.234253 | orchestrator | 2026-01-10 14:41:27.234256 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-10 14:41:27.234260 | orchestrator | Saturday 10 January 2026 14:33:36 +0000 (0:00:00.302) 0:03:56.752 ****** 2026-01-10 14:41:27.234264 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.234268 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.234271 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.234275 | orchestrator | 2026-01-10 14:41:27.234279 | orchestrator | TASK [ceph-handler : Check for a mgr container] 
******************************** 2026-01-10 14:41:27.234283 | orchestrator | Saturday 10 January 2026 14:33:37 +0000 (0:00:00.297) 0:03:57.050 ****** 2026-01-10 14:41:27.234286 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.234290 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.234294 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.234297 | orchestrator | 2026-01-10 14:41:27.234301 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-10 14:41:27.234305 | orchestrator | Saturday 10 January 2026 14:33:38 +0000 (0:00:01.110) 0:03:58.160 ****** 2026-01-10 14:41:27.234309 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.234312 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.234316 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.234320 | orchestrator | 2026-01-10 14:41:27.234323 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-10 14:41:27.234327 | orchestrator | Saturday 10 January 2026 14:33:38 +0000 (0:00:00.364) 0:03:58.525 ****** 2026-01-10 14:41:27.234340 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.234344 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.234347 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.234351 | orchestrator | 2026-01-10 14:41:27.234355 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-10 14:41:27.234359 | orchestrator | Saturday 10 January 2026 14:33:39 +0000 (0:00:00.311) 0:03:58.837 ****** 2026-01-10 14:41:27.234362 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.234366 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.234370 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.234374 | orchestrator | 2026-01-10 14:41:27.234377 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 
2026-01-10 14:41:27.234383 | orchestrator | Saturday 10 January 2026 14:33:39 +0000 (0:00:00.783) 0:03:59.620 ****** 2026-01-10 14:41:27.234387 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.234391 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.234394 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.234398 | orchestrator | 2026-01-10 14:41:27.234402 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-10 14:41:27.234406 | orchestrator | Saturday 10 January 2026 14:33:40 +0000 (0:00:00.679) 0:04:00.300 ****** 2026-01-10 14:41:27.234409 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.234413 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.234417 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.234420 | orchestrator | 2026-01-10 14:41:27.234424 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-10 14:41:27.234428 | orchestrator | Saturday 10 January 2026 14:33:40 +0000 (0:00:00.469) 0:04:00.769 ****** 2026-01-10 14:41:27.234432 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.234435 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.234439 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.234443 | orchestrator | 2026-01-10 14:41:27.234446 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-10 14:41:27.234450 | orchestrator | Saturday 10 January 2026 14:33:41 +0000 (0:00:00.339) 0:04:01.109 ****** 2026-01-10 14:41:27.234457 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.234461 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.234464 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.234468 | orchestrator | 2026-01-10 14:41:27.234472 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-10 14:41:27.234475 | 
orchestrator | Saturday 10 January 2026 14:33:41 +0000 (0:00:00.274) 0:04:01.384 ****** 2026-01-10 14:41:27.234479 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.234483 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.234486 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.234490 | orchestrator | 2026-01-10 14:41:27.234494 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-10 14:41:27.234526 | orchestrator | Saturday 10 January 2026 14:33:41 +0000 (0:00:00.303) 0:04:01.687 ****** 2026-01-10 14:41:27.234531 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.234535 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.234539 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.234542 | orchestrator | 2026-01-10 14:41:27.234546 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-10 14:41:27.234550 | orchestrator | Saturday 10 January 2026 14:33:42 +0000 (0:00:00.556) 0:04:02.243 ****** 2026-01-10 14:41:27.234553 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.234557 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.234561 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.234564 | orchestrator | 2026-01-10 14:41:27.234568 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-10 14:41:27.234572 | orchestrator | Saturday 10 January 2026 14:33:42 +0000 (0:00:00.329) 0:04:02.573 ****** 2026-01-10 14:41:27.234575 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.234579 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.234583 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.234586 | orchestrator | 2026-01-10 14:41:27.234590 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-10 14:41:27.234594 | 
orchestrator | Saturday 10 January 2026 14:33:43 +0000 (0:00:00.308) 0:04:02.881 ****** 2026-01-10 14:41:27.234597 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.234601 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.234605 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.234608 | orchestrator | 2026-01-10 14:41:27.234612 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-10 14:41:27.234616 | orchestrator | Saturday 10 January 2026 14:33:43 +0000 (0:00:00.646) 0:04:03.527 ****** 2026-01-10 14:41:27.234619 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.234623 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.234627 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.234630 | orchestrator | 2026-01-10 14:41:27.234634 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-10 14:41:27.234638 | orchestrator | Saturday 10 January 2026 14:33:44 +0000 (0:00:00.579) 0:04:04.106 ****** 2026-01-10 14:41:27.234641 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.234645 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.234649 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.234652 | orchestrator | 2026-01-10 14:41:27.234656 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-01-10 14:41:27.234660 | orchestrator | Saturday 10 January 2026 14:33:44 +0000 (0:00:00.623) 0:04:04.730 ****** 2026-01-10 14:41:27.234663 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.234667 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.234671 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.234675 | orchestrator | 2026-01-10 14:41:27.234678 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-01-10 14:41:27.234682 | orchestrator | Saturday 10 January 2026 14:33:45 +0000 (0:00:00.394) 
0:04:05.124 ****** 2026-01-10 14:41:27.234686 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:41:27.234694 | orchestrator | 2026-01-10 14:41:27.234698 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-01-10 14:41:27.234702 | orchestrator | Saturday 10 January 2026 14:33:46 +0000 (0:00:00.757) 0:04:05.882 ****** 2026-01-10 14:41:27.234705 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.234709 | orchestrator | 2026-01-10 14:41:27.234716 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-01-10 14:41:27.234720 | orchestrator | Saturday 10 January 2026 14:33:46 +0000 (0:00:00.131) 0:04:06.013 ****** 2026-01-10 14:41:27.234723 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-10 14:41:27.234727 | orchestrator | 2026-01-10 14:41:27.234731 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-01-10 14:41:27.234734 | orchestrator | Saturday 10 January 2026 14:33:47 +0000 (0:00:01.082) 0:04:07.096 ****** 2026-01-10 14:41:27.234738 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.234742 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.234745 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.234749 | orchestrator | 2026-01-10 14:41:27.234755 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-01-10 14:41:27.234759 | orchestrator | Saturday 10 January 2026 14:33:47 +0000 (0:00:00.371) 0:04:07.467 ****** 2026-01-10 14:41:27.234763 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.234767 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.234770 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.234774 | orchestrator | 2026-01-10 14:41:27.234778 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] 
******************************* 2026-01-10 14:41:27.234781 | orchestrator | Saturday 10 January 2026 14:33:48 +0000 (0:00:00.589) 0:04:08.057 ****** 2026-01-10 14:41:27.234785 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:41:27.234789 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:41:27.234792 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:41:27.234796 | orchestrator | 2026-01-10 14:41:27.234800 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-01-10 14:41:27.234804 | orchestrator | Saturday 10 January 2026 14:33:49 +0000 (0:00:01.409) 0:04:09.467 ****** 2026-01-10 14:41:27.234807 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:41:27.234811 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:41:27.234815 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:41:27.234818 | orchestrator | 2026-01-10 14:41:27.234822 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-01-10 14:41:27.234826 | orchestrator | Saturday 10 January 2026 14:33:50 +0000 (0:00:00.809) 0:04:10.276 ****** 2026-01-10 14:41:27.234829 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:41:27.234833 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:41:27.234837 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:41:27.234840 | orchestrator | 2026-01-10 14:41:27.234844 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-01-10 14:41:27.234848 | orchestrator | Saturday 10 January 2026 14:33:51 +0000 (0:00:00.676) 0:04:10.952 ****** 2026-01-10 14:41:27.234851 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.234855 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.234861 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.234866 | orchestrator | 2026-01-10 14:41:27.234872 | orchestrator | TASK [ceph-mon : Create admin keyring] 
***************************************** 2026-01-10 14:41:27.234878 | orchestrator | Saturday 10 January 2026 14:33:51 +0000 (0:00:00.679) 0:04:11.632 ****** 2026-01-10 14:41:27.234884 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:41:27.234891 | orchestrator | 2026-01-10 14:41:27.234896 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-01-10 14:41:27.234903 | orchestrator | Saturday 10 January 2026 14:33:53 +0000 (0:00:01.488) 0:04:13.120 ****** 2026-01-10 14:41:27.234909 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.234915 | orchestrator | 2026-01-10 14:41:27.234920 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-01-10 14:41:27.234933 | orchestrator | Saturday 10 January 2026 14:33:54 +0000 (0:00:01.473) 0:04:14.594 ****** 2026-01-10 14:41:27.234939 | orchestrator | changed: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:27.234944 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:27.234950 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-10 14:41:27.234956 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-10 14:41:27.234963 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-10 14:41:27.234968 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-10 14:41:27.234974 | orchestrator | changed: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:41:27.234979 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2026-01-10 14:41:27.234985 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-01-10 14:41:27.234991 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-01-10 14:41:27.234997 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:41:27.235004 | 
orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-01-10 14:41:27.235009 | orchestrator | 2026-01-10 14:41:27.235013 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-01-10 14:41:27.235017 | orchestrator | Saturday 10 January 2026 14:33:58 +0000 (0:00:03.658) 0:04:18.252 ****** 2026-01-10 14:41:27.235020 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:41:27.235024 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:41:27.235028 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:41:27.235032 | orchestrator | 2026-01-10 14:41:27.235035 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-01-10 14:41:27.235039 | orchestrator | Saturday 10 January 2026 14:33:59 +0000 (0:00:01.276) 0:04:19.529 ****** 2026-01-10 14:41:27.235043 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.235046 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.235050 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.235054 | orchestrator | 2026-01-10 14:41:27.235058 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-01-10 14:41:27.235061 | orchestrator | Saturday 10 January 2026 14:34:00 +0000 (0:00:00.413) 0:04:19.942 ****** 2026-01-10 14:41:27.235065 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.235069 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.235072 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.235076 | orchestrator | 2026-01-10 14:41:27.235079 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-01-10 14:41:27.235083 | orchestrator | Saturday 10 January 2026 14:34:00 +0000 (0:00:00.661) 0:04:20.604 ****** 2026-01-10 14:41:27.235087 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:41:27.235095 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:41:27.235099 | orchestrator | changed: 
[testbed-node-1] 2026-01-10 14:41:27.235103 | orchestrator | 2026-01-10 14:41:27.235106 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-01-10 14:41:27.235110 | orchestrator | Saturday 10 January 2026 14:34:03 +0000 (0:00:02.357) 0:04:22.961 ****** 2026-01-10 14:41:27.235114 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:41:27.235117 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:41:27.235121 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:41:27.235125 | orchestrator | 2026-01-10 14:41:27.235128 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-01-10 14:41:27.235135 | orchestrator | Saturday 10 January 2026 14:34:04 +0000 (0:00:01.296) 0:04:24.258 ****** 2026-01-10 14:41:27.235139 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.235143 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.235146 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.235150 | orchestrator | 2026-01-10 14:41:27.235154 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-01-10 14:41:27.235161 | orchestrator | Saturday 10 January 2026 14:34:04 +0000 (0:00:00.428) 0:04:24.686 ****** 2026-01-10 14:41:27.235166 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:41:27.235169 | orchestrator | 2026-01-10 14:41:27.235173 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-01-10 14:41:27.235177 | orchestrator | Saturday 10 January 2026 14:34:05 +0000 (0:00:00.620) 0:04:25.307 ****** 2026-01-10 14:41:27.235180 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.235184 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.235188 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.235191 | orchestrator | 
2026-01-10 14:41:27.235195 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-01-10 14:41:27.235199 | orchestrator | Saturday 10 January 2026 14:34:05 +0000 (0:00:00.267) 0:04:25.575 ****** 2026-01-10 14:41:27.235202 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.235206 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.235210 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.235213 | orchestrator | 2026-01-10 14:41:27.235217 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-01-10 14:41:27.235221 | orchestrator | Saturday 10 January 2026 14:34:06 +0000 (0:00:00.399) 0:04:25.974 ****** 2026-01-10 14:41:27.235224 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:41:27.235228 | orchestrator | 2026-01-10 14:41:27.235232 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-01-10 14:41:27.235235 | orchestrator | Saturday 10 January 2026 14:34:07 +0000 (0:00:00.917) 0:04:26.892 ****** 2026-01-10 14:41:27.235239 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:41:27.235243 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:41:27.235246 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:41:27.235250 | orchestrator | 2026-01-10 14:41:27.235254 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-01-10 14:41:27.235257 | orchestrator | Saturday 10 January 2026 14:34:10 +0000 (0:00:03.022) 0:04:29.915 ****** 2026-01-10 14:41:27.235261 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:41:27.235265 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:41:27.235268 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:41:27.235272 | orchestrator | 2026-01-10 14:41:27.235276 | orchestrator | TASK [ceph-mon : Enable 
ceph-mon.target] *************************************** 2026-01-10 14:41:27.235280 | orchestrator | Saturday 10 January 2026 14:34:11 +0000 (0:00:01.222) 0:04:31.137 ****** 2026-01-10 14:41:27.235283 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:41:27.235287 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:41:27.235291 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:41:27.235294 | orchestrator | 2026-01-10 14:41:27.235298 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-01-10 14:41:27.235302 | orchestrator | Saturday 10 January 2026 14:34:12 +0000 (0:00:01.684) 0:04:32.822 ****** 2026-01-10 14:41:27.235305 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:41:27.235309 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:41:27.235313 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:41:27.235316 | orchestrator | 2026-01-10 14:41:27.235320 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-01-10 14:41:27.235324 | orchestrator | Saturday 10 January 2026 14:34:15 +0000 (0:00:02.065) 0:04:34.887 ****** 2026-01-10 14:41:27.235327 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:41:27.235331 | orchestrator | 2026-01-10 14:41:27.235335 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-01-10 14:41:27.235338 | orchestrator | Saturday 10 January 2026 14:34:15 +0000 (0:00:00.671) 0:04:35.559 ****** 2026-01-10 14:41:27.235342 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-01-10 14:41:27.235348 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.235352 | orchestrator | 2026-01-10 14:41:27.235356 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-01-10 14:41:27.235360 | orchestrator | Saturday 10 January 2026 14:34:37 +0000 (0:00:21.882) 0:04:57.441 ****** 2026-01-10 14:41:27.235363 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.235367 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.235371 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.235374 | orchestrator | 2026-01-10 14:41:27.235378 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-01-10 14:41:27.235382 | orchestrator | Saturday 10 January 2026 14:34:47 +0000 (0:00:09.858) 0:05:07.300 ****** 2026-01-10 14:41:27.235385 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.235389 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.235393 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.235396 | orchestrator | 2026-01-10 14:41:27.235400 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-01-10 14:41:27.235406 | orchestrator | Saturday 10 January 2026 14:34:48 +0000 (0:00:00.570) 0:05:07.870 ****** 2026-01-10 14:41:27.235412 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0da394407a47e82de5f99ac0b7bb8de1b0740870'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-01-10 14:41:27.235420 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0da394407a47e82de5f99ac0b7bb8de1b0740870'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-01-10 14:41:27.235426 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0da394407a47e82de5f99ac0b7bb8de1b0740870'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-01-10 14:41:27.235431 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0da394407a47e82de5f99ac0b7bb8de1b0740870'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-01-10 14:41:27.235436 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0da394407a47e82de5f99ac0b7bb8de1b0740870'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-01-10 14:41:27.235441 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0da394407a47e82de5f99ac0b7bb8de1b0740870'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__0da394407a47e82de5f99ac0b7bb8de1b0740870'}])  2026-01-10 14:41:27.235446 | orchestrator | 2026-01-10 14:41:27.235449 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-01-10 14:41:27.235453 | orchestrator | Saturday 10 January 2026 14:35:03 +0000 (0:00:15.384) 0:05:23.255 ****** 2026-01-10 14:41:27.235457 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.235460 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.235467 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.235471 | orchestrator | 2026-01-10 14:41:27.235474 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-10 14:41:27.235478 | orchestrator | Saturday 10 January 2026 14:35:03 +0000 (0:00:00.450) 0:05:23.705 ****** 2026-01-10 14:41:27.235482 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:41:27.235486 | orchestrator | 2026-01-10 14:41:27.235489 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-10 14:41:27.235493 | orchestrator | Saturday 10 January 2026 14:35:04 +0000 (0:00:00.803) 0:05:24.509 ****** 2026-01-10 14:41:27.235497 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.235518 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.235523 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.235526 | orchestrator | 2026-01-10 14:41:27.235530 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-10 14:41:27.235534 | orchestrator | Saturday 10 January 2026 14:35:05 +0000 (0:00:00.325) 0:05:24.834 ****** 2026-01-10 14:41:27.235537 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.235541 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.235545 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.235548 | orchestrator | 2026-01-10 14:41:27.235552 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-10 
14:41:27.235556 | orchestrator | Saturday 10 January 2026 14:35:05 +0000 (0:00:00.414) 0:05:25.248 ****** 2026-01-10 14:41:27.235559 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-10 14:41:27.235563 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-10 14:41:27.235567 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-10 14:41:27.235570 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.235574 | orchestrator | 2026-01-10 14:41:27.235578 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-10 14:41:27.235581 | orchestrator | Saturday 10 January 2026 14:35:06 +0000 (0:00:00.943) 0:05:26.191 ****** 2026-01-10 14:41:27.235585 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.235589 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.235598 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.235605 | orchestrator | 2026-01-10 14:41:27.235611 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-01-10 14:41:27.235616 | orchestrator | 2026-01-10 14:41:27.235622 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-10 14:41:27.235627 | orchestrator | Saturday 10 January 2026 14:35:07 +0000 (0:00:00.988) 0:05:27.180 ****** 2026-01-10 14:41:27.235633 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:41:27.235639 | orchestrator | 2026-01-10 14:41:27.235649 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-10 14:41:27.235656 | orchestrator | Saturday 10 January 2026 14:35:07 +0000 (0:00:00.554) 0:05:27.735 ****** 2026-01-10 14:41:27.235663 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-10 14:41:27.235669 | orchestrator | 2026-01-10 14:41:27.235675 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-10 14:41:27.235681 | orchestrator | Saturday 10 January 2026 14:35:08 +0000 (0:00:00.710) 0:05:28.446 ****** 2026-01-10 14:41:27.235687 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.235693 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.235696 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.235700 | orchestrator | 2026-01-10 14:41:27.235704 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-10 14:41:27.235708 | orchestrator | Saturday 10 January 2026 14:35:09 +0000 (0:00:00.751) 0:05:29.197 ****** 2026-01-10 14:41:27.235711 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.235719 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.235723 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.235726 | orchestrator | 2026-01-10 14:41:27.235730 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-10 14:41:27.235734 | orchestrator | Saturday 10 January 2026 14:35:09 +0000 (0:00:00.273) 0:05:29.470 ****** 2026-01-10 14:41:27.235737 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.235741 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.235745 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.235748 | orchestrator | 2026-01-10 14:41:27.235752 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-10 14:41:27.235756 | orchestrator | Saturday 10 January 2026 14:35:10 +0000 (0:00:00.408) 0:05:29.879 ****** 2026-01-10 14:41:27.235759 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.235763 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.235767 | orchestrator | skipping: 
[testbed-node-2] 2026-01-10 14:41:27.235770 | orchestrator | 2026-01-10 14:41:27.235774 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-10 14:41:27.235778 | orchestrator | Saturday 10 January 2026 14:35:10 +0000 (0:00:00.312) 0:05:30.191 ****** 2026-01-10 14:41:27.235782 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.235785 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.235789 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.235793 | orchestrator | 2026-01-10 14:41:27.235796 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-10 14:41:27.235800 | orchestrator | Saturday 10 January 2026 14:35:11 +0000 (0:00:00.699) 0:05:30.891 ****** 2026-01-10 14:41:27.235804 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.235807 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.235811 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.235815 | orchestrator | 2026-01-10 14:41:27.235818 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-10 14:41:27.235822 | orchestrator | Saturday 10 January 2026 14:35:11 +0000 (0:00:00.376) 0:05:31.267 ****** 2026-01-10 14:41:27.235826 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.235829 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.235833 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.235837 | orchestrator | 2026-01-10 14:41:27.235840 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-10 14:41:27.235844 | orchestrator | Saturday 10 January 2026 14:35:11 +0000 (0:00:00.322) 0:05:31.590 ****** 2026-01-10 14:41:27.235848 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.235851 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.235855 | orchestrator | ok: [testbed-node-2] 2026-01-10 
14:41:27.235859 | orchestrator | 2026-01-10 14:41:27.235862 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-10 14:41:27.235866 | orchestrator | Saturday 10 January 2026 14:35:12 +0000 (0:00:01.208) 0:05:32.798 ****** 2026-01-10 14:41:27.235870 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.235874 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.235877 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.235881 | orchestrator | 2026-01-10 14:41:27.235885 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-10 14:41:27.235888 | orchestrator | Saturday 10 January 2026 14:35:13 +0000 (0:00:00.837) 0:05:33.635 ****** 2026-01-10 14:41:27.235892 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.235896 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.235899 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.235903 | orchestrator | 2026-01-10 14:41:27.235907 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-10 14:41:27.235910 | orchestrator | Saturday 10 January 2026 14:35:14 +0000 (0:00:00.351) 0:05:33.986 ****** 2026-01-10 14:41:27.235914 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.235918 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.235924 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.235928 | orchestrator | 2026-01-10 14:41:27.235932 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-10 14:41:27.235936 | orchestrator | Saturday 10 January 2026 14:35:14 +0000 (0:00:00.383) 0:05:34.370 ****** 2026-01-10 14:41:27.235939 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.235943 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.235947 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.235950 | orchestrator | 
2026-01-10 14:41:27.235954 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-10 14:41:27.235961 | orchestrator | Saturday 10 January 2026 14:35:15 +0000 (0:00:00.558) 0:05:34.929 ****** 2026-01-10 14:41:27.235964 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.235968 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.235972 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.235975 | orchestrator | 2026-01-10 14:41:27.235979 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-10 14:41:27.235983 | orchestrator | Saturday 10 January 2026 14:35:15 +0000 (0:00:00.337) 0:05:35.267 ****** 2026-01-10 14:41:27.235986 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.235990 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.235994 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.235997 | orchestrator | 2026-01-10 14:41:27.236001 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-10 14:41:27.236008 | orchestrator | Saturday 10 January 2026 14:35:15 +0000 (0:00:00.348) 0:05:35.615 ****** 2026-01-10 14:41:27.236012 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.236016 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.236019 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.236023 | orchestrator | 2026-01-10 14:41:27.236027 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-10 14:41:27.236030 | orchestrator | Saturday 10 January 2026 14:35:16 +0000 (0:00:00.351) 0:05:35.967 ****** 2026-01-10 14:41:27.236034 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.236038 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.236041 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.236045 | orchestrator | 
2026-01-10 14:41:27.236049 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-10 14:41:27.236053 | orchestrator | Saturday 10 January 2026 14:35:16 +0000 (0:00:00.586) 0:05:36.553 ******
2026-01-10 14:41:27.236056 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.236060 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.236064 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.236068 | orchestrator |
2026-01-10 14:41:27.236071 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-10 14:41:27.236075 | orchestrator | Saturday 10 January 2026 14:35:17 +0000 (0:00:00.338) 0:05:36.892 ******
2026-01-10 14:41:27.236079 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.236082 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.236086 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.236090 | orchestrator |
2026-01-10 14:41:27.236093 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-10 14:41:27.236097 | orchestrator | Saturday 10 January 2026 14:35:17 +0000 (0:00:00.421) 0:05:37.313 ******
2026-01-10 14:41:27.236101 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.236104 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.236108 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.236112 | orchestrator |
2026-01-10 14:41:27.236115 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-01-10 14:41:27.236119 | orchestrator | Saturday 10 January 2026 14:35:18 +0000 (0:00:00.930) 0:05:38.244 ******
2026-01-10 14:41:27.236123 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-10 14:41:27.236126 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-10 14:41:27.236130 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-10 14:41:27.236138 | orchestrator |
2026-01-10 14:41:27.236141 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-01-10 14:41:27.236145 | orchestrator | Saturday 10 January 2026 14:35:18 +0000 (0:00:00.573) 0:05:38.817 ******
2026-01-10 14:41:27.236149 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:41:27.236152 | orchestrator |
2026-01-10 14:41:27.236156 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-01-10 14:41:27.236160 | orchestrator | Saturday 10 January 2026 14:35:19 +0000 (0:00:00.484) 0:05:39.301 ******
2026-01-10 14:41:27.236163 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:41:27.236167 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:41:27.236171 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:41:27.236174 | orchestrator |
2026-01-10 14:41:27.236178 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-01-10 14:41:27.236182 | orchestrator | Saturday 10 January 2026 14:35:20 +0000 (0:00:00.661) 0:05:39.963 ******
2026-01-10 14:41:27.236185 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.236189 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.236193 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.236196 | orchestrator |
2026-01-10 14:41:27.236200 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-01-10 14:41:27.236204 | orchestrator | Saturday 10 January 2026 14:35:20 +0000 (0:00:00.474) 0:05:40.437 ******
2026-01-10 14:41:27.236208 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-10 14:41:27.236211 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-10 14:41:27.236215 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-10 14:41:27.236218 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-01-10 14:41:27.236222 | orchestrator |
2026-01-10 14:41:27.236226 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-01-10 14:41:27.236229 | orchestrator | Saturday 10 January 2026 14:35:31 +0000 (0:00:10.722) 0:05:51.160 ******
2026-01-10 14:41:27.236233 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.236237 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.236240 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.236244 | orchestrator |
2026-01-10 14:41:27.236248 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-01-10 14:41:27.236251 | orchestrator | Saturday 10 January 2026 14:35:31 +0000 (0:00:00.429) 0:05:51.589 ******
2026-01-10 14:41:27.236255 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-10 14:41:27.236259 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-10 14:41:27.236262 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-10 14:41:27.236266 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-10 14:41:27.236270 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-01-10 14:41:27.236276 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-10 14:41:27.236280 | orchestrator |
2026-01-10 14:41:27.236284 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-01-10 14:41:27.236288 | orchestrator | Saturday 10 January 2026 14:35:34 +0000 (0:00:02.617) 0:05:54.206 ******
2026-01-10 14:41:27.236291 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-10 14:41:27.236295 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-10 14:41:27.236299 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-10 14:41:27.236302 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-10 14:41:27.236306 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-10 14:41:27.236312 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-10 14:41:27.236316 | orchestrator |
2026-01-10 14:41:27.236320 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-01-10 14:41:27.236327 | orchestrator | Saturday 10 January 2026 14:35:35 +0000 (0:00:01.166) 0:05:55.373 ******
2026-01-10 14:41:27.236331 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.236334 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.236338 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.236342 | orchestrator |
2026-01-10 14:41:27.236345 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-01-10 14:41:27.236349 | orchestrator | Saturday 10 January 2026 14:35:36 +0000 (0:00:01.206) 0:05:56.580 ******
2026-01-10 14:41:27.236353 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.236356 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.236360 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.236364 | orchestrator |
2026-01-10 14:41:27.236367 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-01-10 14:41:27.236371 | orchestrator | Saturday 10 January 2026 14:35:37 +0000 (0:00:00.352) 0:05:56.932 ******
2026-01-10 14:41:27.236375 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.236378 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.236382 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.236386 | orchestrator |
2026-01-10 14:41:27.236389 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-01-10 14:41:27.236393 | orchestrator | Saturday 10 January 2026 14:35:37 +0000 (0:00:00.358) 0:05:57.290 ******
2026-01-10 14:41:27.236397 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:41:27.236401 | orchestrator |
2026-01-10 14:41:27.236404 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-01-10 14:41:27.236408 | orchestrator | Saturday 10 January 2026 14:35:38 +0000 (0:00:00.777) 0:05:58.067 ******
2026-01-10 14:41:27.236411 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.236415 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.236419 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.236423 | orchestrator |
2026-01-10 14:41:27.236426 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-01-10 14:41:27.236430 | orchestrator | Saturday 10 January 2026 14:35:38 +0000 (0:00:00.374) 0:05:58.442 ******
2026-01-10 14:41:27.236434 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.236437 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.236441 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.236445 | orchestrator |
2026-01-10 14:41:27.236448 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-01-10 14:41:27.236452 | orchestrator | Saturday 10 January 2026 14:35:38 +0000 (0:00:00.365) 0:05:58.807 ******
2026-01-10 14:41:27.236456 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:41:27.236460 | orchestrator |
2026-01-10 14:41:27.236463 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-01-10 14:41:27.236467 | orchestrator | Saturday 10 January 2026 14:35:39 +0000 (0:00:00.553) 0:05:59.361 ******
2026-01-10 14:41:27.236471 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:41:27.236475 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:41:27.236478 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:41:27.236482 | orchestrator |
2026-01-10 14:41:27.236486 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-01-10 14:41:27.236489 | orchestrator | Saturday 10 January 2026 14:35:41 +0000 (0:00:01.703) 0:06:01.064 ******
2026-01-10 14:41:27.236493 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:41:27.236497 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:41:27.236536 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:41:27.236540 | orchestrator |
2026-01-10 14:41:27.236543 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-01-10 14:41:27.236547 | orchestrator | Saturday 10 January 2026 14:35:42 +0000 (0:00:01.282) 0:06:02.347 ******
2026-01-10 14:41:27.236551 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:41:27.236559 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:41:27.236563 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:41:27.236567 | orchestrator |
2026-01-10 14:41:27.236571 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-01-10 14:41:27.236574 | orchestrator | Saturday 10 January 2026 14:35:44 +0000 (0:00:01.964) 0:06:04.311 ******
2026-01-10 14:41:27.236578 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:41:27.236582 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:41:27.236585 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:41:27.236589 | orchestrator |
2026-01-10 14:41:27.236592 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-01-10 14:41:27.236596 | orchestrator | Saturday 10 January 2026 14:35:46 +0000 (0:00:02.076) 0:06:06.388 ******
2026-01-10 14:41:27.236600 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.236604 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.236607 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-01-10 14:41:27.236611 | orchestrator |
2026-01-10 14:41:27.236615 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-01-10 14:41:27.236618 | orchestrator | Saturday 10 January 2026 14:35:47 +0000 (0:00:00.584) 0:06:06.973 ******
2026-01-10 14:41:27.236625 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-01-10 14:41:27.236629 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-01-10 14:41:27.236633 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-01-10 14:41:27.236636 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-01-10 14:41:27.236643 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-01-10 14:41:27.236647 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left).
2026-01-10 14:41:27.236651 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-10 14:41:27.236654 | orchestrator |
2026-01-10 14:41:27.236658 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-01-10 14:41:27.236662 | orchestrator | Saturday 10 January 2026 14:36:23 +0000 (0:00:36.207) 0:06:43.180 ******
2026-01-10 14:41:27.236666 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-10 14:41:27.236669 | orchestrator |
2026-01-10 14:41:27.236673 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-01-10 14:41:27.236679 | orchestrator | Saturday 10 January 2026 14:36:24 +0000 (0:00:01.322) 0:06:44.503 ******
2026-01-10 14:41:27.236685 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.236691 | orchestrator |
2026-01-10 14:41:27.236697 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-01-10 14:41:27.236703 | orchestrator | Saturday 10 January 2026 14:36:25 +0000 (0:00:00.342) 0:06:44.846 ******
2026-01-10 14:41:27.236709 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.236715 | orchestrator |
2026-01-10 14:41:27.236720 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-01-10 14:41:27.236725 | orchestrator | Saturday 10 January 2026 14:36:25 +0000 (0:00:00.147) 0:06:44.993 ******
2026-01-10 14:41:27.236731 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-01-10 14:41:27.236736 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-01-10 14:41:27.236742 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-01-10 14:41:27.236747 | orchestrator |
2026-01-10 14:41:27.236752 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-01-10 14:41:27.236759 | orchestrator | Saturday 10 January 2026 14:36:31 +0000 (0:00:06.582) 0:06:51.576 ******
2026-01-10 14:41:27.236765 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-01-10 14:41:27.236777 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-01-10 14:41:27.236783 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-01-10 14:41:27.236789 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-01-10 14:41:27.236794 | orchestrator |
2026-01-10 14:41:27.236799 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-10 14:41:27.236805 | orchestrator | Saturday 10 January 2026 14:36:37 +0000 (0:00:05.631) 0:06:57.207 ******
2026-01-10 14:41:27.236811 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:41:27.236817 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:41:27.236822 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:41:27.236828 | orchestrator |
2026-01-10 14:41:27.236833 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-01-10 14:41:27.236839 | orchestrator | Saturday 10 January 2026 14:36:38 +0000 (0:00:00.669) 0:06:57.877 ******
2026-01-10 14:41:27.236845 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:41:27.236851 | orchestrator |
2026-01-10 14:41:27.236857 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-01-10 14:41:27.236863 | orchestrator | Saturday 10 January 2026 14:36:38 +0000 (0:00:00.547) 0:06:58.424 ******
2026-01-10 14:41:27.236869 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.236875 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.236881 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.236887 | orchestrator |
2026-01-10 14:41:27.236893 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-01-10 14:41:27.236899 | orchestrator | Saturday 10 January 2026 14:36:39 +0000 (0:00:00.573) 0:06:58.997 ******
2026-01-10 14:41:27.236905 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:41:27.236912 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:41:27.236918 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:41:27.236924 | orchestrator |
2026-01-10 14:41:27.236930 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-01-10 14:41:27.236936 | orchestrator | Saturday 10 January 2026 14:36:40 +0000 (0:00:01.306) 0:07:00.304 ******
2026-01-10 14:41:27.236943 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-10 14:41:27.236949 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-10 14:41:27.236955 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-10 14:41:27.236961 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.236967 | orchestrator |
2026-01-10 14:41:27.236974 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-01-10 14:41:27.236978 | orchestrator | Saturday 10 January 2026 14:36:41 +0000 (0:00:00.644) 0:07:00.948 ******
2026-01-10 14:41:27.236984 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.236990 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.236996 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.237002 | orchestrator |
2026-01-10 14:41:27.237008 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-01-10 14:41:27.237015 | orchestrator |
2026-01-10 14:41:27.237021 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-10 14:41:27.237033 | orchestrator | Saturday 10 January 2026 14:36:41 +0000 (0:00:00.827) 0:07:01.775 ******
2026-01-10 14:41:27.237040 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:41:27.237047 | orchestrator |
2026-01-10 14:41:27.237053 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-10 14:41:27.237059 | orchestrator | Saturday 10 January 2026 14:36:42 +0000 (0:00:00.555) 0:07:02.330 ******
2026-01-10 14:41:27.237070 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:41:27.237081 | orchestrator |
2026-01-10 14:41:27.237088 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-10 14:41:27.237094 | orchestrator | Saturday 10 January 2026 14:36:43 +0000 (0:00:00.756) 0:07:03.087 ******
2026-01-10 14:41:27.237100 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.237106 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.237112 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.237118 | orchestrator |
2026-01-10 14:41:27.237125 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-10 14:41:27.237131 | orchestrator | Saturday 10 January 2026 14:36:43 +0000 (0:00:00.320) 0:07:03.407 ******
2026-01-10 14:41:27.237137 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.237143 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.237149 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.237155 | orchestrator |
2026-01-10 14:41:27.237162 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-10 14:41:27.237168 | orchestrator | Saturday 10 January 2026 14:36:44 +0000 (0:00:00.746) 0:07:04.154 ******
2026-01-10 14:41:27.237174 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.237180 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.237186 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.237192 | orchestrator |
2026-01-10 14:41:27.237198 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-10 14:41:27.237204 | orchestrator | Saturday 10 January 2026 14:36:45 +0000 (0:00:00.784) 0:07:04.938 ******
2026-01-10 14:41:27.237211 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.237217 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.237223 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.237228 | orchestrator |
2026-01-10 14:41:27.237235 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-10 14:41:27.237241 | orchestrator | Saturday 10 January 2026 14:36:45 +0000 (0:00:00.744) 0:07:05.683 ******
2026-01-10 14:41:27.237247 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.237253 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.237260 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.237266 | orchestrator |
2026-01-10 14:41:27.237272 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-10 14:41:27.237278 | orchestrator | Saturday 10 January 2026 14:36:46 +0000 (0:00:00.675) 0:07:06.358 ******
2026-01-10 14:41:27.237284 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.237291 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.237297 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.237303 | orchestrator |
2026-01-10 14:41:27.237309 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-10 14:41:27.237316 | orchestrator | Saturday 10 January 2026 14:36:46 +0000 (0:00:00.317) 0:07:06.676 ******
2026-01-10 14:41:27.237322 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.237328 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.237334 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.237340 | orchestrator |
2026-01-10 14:41:27.237344 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-10 14:41:27.237348 | orchestrator | Saturday 10 January 2026 14:36:47 +0000 (0:00:00.355) 0:07:07.031 ******
2026-01-10 14:41:27.237351 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.237355 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.237359 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.237362 | orchestrator |
2026-01-10 14:41:27.237366 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-10 14:41:27.237370 | orchestrator | Saturday 10 January 2026 14:36:47 +0000 (0:00:00.728) 0:07:07.759 ******
2026-01-10 14:41:27.237373 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.237377 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.237381 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.237384 | orchestrator |
2026-01-10 14:41:27.237388 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-10 14:41:27.237396 | orchestrator | Saturday 10 January 2026 14:36:49 +0000 (0:00:01.140) 0:07:08.900 ******
2026-01-10 14:41:27.237400 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.237403 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.237407 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.237411 | orchestrator |
2026-01-10 14:41:27.237415 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-10 14:41:27.237418 | orchestrator | Saturday 10 January 2026 14:36:49 +0000 (0:00:00.320) 0:07:09.220 ******
2026-01-10 14:41:27.237422 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.237426 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.237429 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.237433 | orchestrator |
2026-01-10 14:41:27.237436 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-10 14:41:27.237440 | orchestrator | Saturday 10 January 2026 14:36:49 +0000 (0:00:00.306) 0:07:09.527 ******
2026-01-10 14:41:27.237444 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.237448 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.237451 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.237455 | orchestrator |
2026-01-10 14:41:27.237459 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-10 14:41:27.237463 | orchestrator | Saturday 10 January 2026 14:36:50 +0000 (0:00:00.337) 0:07:09.864 ******
2026-01-10 14:41:27.237466 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.237470 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.237474 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.237477 | orchestrator |
2026-01-10 14:41:27.237481 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-10 14:41:27.237488 | orchestrator | Saturday 10 January 2026 14:36:50 +0000 (0:00:00.660) 0:07:10.525 ******
2026-01-10 14:41:27.237492 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.237496 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.237519 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.237524 | orchestrator |
2026-01-10 14:41:27.237528 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-10 14:41:27.237532 | orchestrator | Saturday 10 January 2026 14:36:51 +0000 (0:00:00.470) 0:07:10.995 ******
2026-01-10 14:41:27.237536 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.237539 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.237543 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.237547 | orchestrator |
2026-01-10 14:41:27.237554 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-10 14:41:27.237557 | orchestrator | Saturday 10 January 2026 14:36:51 +0000 (0:00:00.320) 0:07:11.315 ******
2026-01-10 14:41:27.237561 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.237565 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.237569 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.237572 | orchestrator |
2026-01-10 14:41:27.237576 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-10 14:41:27.237580 | orchestrator | Saturday 10 January 2026 14:36:51 +0000 (0:00:00.290) 0:07:11.606 ******
2026-01-10 14:41:27.237584 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.237587 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.237591 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.237595 | orchestrator |
2026-01-10 14:41:27.237598 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-10 14:41:27.237602 | orchestrator | Saturday 10 January 2026 14:36:52 +0000 (0:00:00.585) 0:07:12.191 ******
2026-01-10 14:41:27.237606 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.237610 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.237613 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.237617 | orchestrator |
2026-01-10 14:41:27.237621 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-10 14:41:27.237625 | orchestrator | Saturday 10 January 2026 14:36:52 +0000 (0:00:00.341) 0:07:12.533 ******
2026-01-10 14:41:27.237632 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.237636 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.237640 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.237643 | orchestrator |
2026-01-10 14:41:27.237647 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-01-10 14:41:27.237651 | orchestrator | Saturday 10 January 2026 14:36:53 +0000 (0:00:00.532) 0:07:13.065 ******
2026-01-10 14:41:27.237654 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.237658 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.237662 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.237665 | orchestrator |
2026-01-10 14:41:27.237669 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-01-10 14:41:27.237673 | orchestrator | Saturday 10 January 2026 14:36:53 +0000 (0:00:00.644) 0:07:13.710 ******
2026-01-10 14:41:27.237676 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-10 14:41:27.237680 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-10 14:41:27.237684 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-10 14:41:27.237688 | orchestrator |
2026-01-10 14:41:27.237692 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-01-10 14:41:27.237695 | orchestrator | Saturday 10 January 2026 14:36:54 +0000 (0:00:00.672) 0:07:14.382 ******
2026-01-10 14:41:27.237699 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:41:27.237703 | orchestrator |
2026-01-10 14:41:27.237707 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-01-10 14:41:27.237710 | orchestrator | Saturday 10 January 2026 14:36:55 +0000 (0:00:00.545) 0:07:14.928 ******
2026-01-10 14:41:27.237714 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.237718 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.237722 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.237725 | orchestrator |
2026-01-10 14:41:27.237729 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-01-10 14:41:27.237733 | orchestrator | Saturday 10 January 2026 14:36:55 +0000 (0:00:00.578) 0:07:15.506 ******
2026-01-10 14:41:27.237736 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.237740 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.237744 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.237747 | orchestrator |
2026-01-10 14:41:27.237751 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-01-10 14:41:27.237755 | orchestrator | Saturday 10 January 2026 14:36:56 +0000 (0:00:00.376) 0:07:15.882 ******
2026-01-10 14:41:27.237758 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.237762 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.237766 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.237769 | orchestrator |
2026-01-10 14:41:27.237773 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-01-10 14:41:27.237777 | orchestrator | Saturday 10 January 2026 14:36:56 +0000 (0:00:00.456) 0:07:16.618 ******
2026-01-10 14:41:27.237780 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.237784 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.237788 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.237791 | orchestrator |
2026-01-10 14:41:27.237795 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-01-10 14:41:27.237799 | orchestrator | Saturday 10 January 2026 14:36:57 +0000 (0:00:00.456) 0:07:17.074 ******
2026-01-10 14:41:27.237802 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-10 14:41:27.237807 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-10 14:41:27.237810 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-10 14:41:27.237814 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-10 14:41:27.237824 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-10 14:41:27.237828 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-10 14:41:27.237831 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-10 14:41:27.237835 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-10 14:41:27.237839 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-10 14:41:27.237845 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-10 14:41:27.237849 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-10 14:41:27.237853 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-10 14:41:27.237857 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-10 14:41:27.237860 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-10 14:41:27.237864 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-10 14:41:27.237868 | orchestrator |
2026-01-10 14:41:27.237871 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-01-10 14:41:27.237875 | orchestrator | Saturday 10 January 2026 14:37:00 +0000 (0:00:03.623) 0:07:20.698 ****** 2026-01-10 14:41:27.237879 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.237883 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.237886 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.237890 | orchestrator | 2026-01-10 14:41:27.237894 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-01-10 14:41:27.237898 | orchestrator | Saturday 10 January 2026 14:37:01 +0000 (0:00:00.352) 0:07:21.050 ****** 2026-01-10 14:41:27.237901 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:41:27.237905 | orchestrator | 2026-01-10 14:41:27.237909 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-01-10 14:41:27.237912 | orchestrator | Saturday 10 January 2026 14:37:01 +0000 (0:00:00.500) 0:07:21.551 ****** 2026-01-10 14:41:27.237916 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-10 14:41:27.237920 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-10 14:41:27.237924 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-10 14:41:27.237928 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-01-10 14:41:27.237932 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-01-10 14:41:27.237935 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-01-10 14:41:27.237939 | orchestrator | 2026-01-10 14:41:27.237943 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-01-10 14:41:27.237946 | orchestrator | Saturday 10 January 2026 14:37:03 +0000 (0:00:01.315) 0:07:22.867 ****** 2026-01-10 14:41:27.237950 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:27.237954 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-10 14:41:27.237958 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-10 14:41:27.237961 | orchestrator | 2026-01-10 14:41:27.237965 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-01-10 14:41:27.237969 | orchestrator | Saturday 10 January 2026 14:37:05 +0000 (0:00:02.174) 0:07:25.041 ****** 2026-01-10 14:41:27.237972 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-10 14:41:27.237976 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-10 14:41:27.237980 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:41:27.237987 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-10 14:41:27.237991 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-10 14:41:27.237995 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:41:27.237999 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-10 14:41:27.238002 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-10 14:41:27.238007 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:41:27.238010 | orchestrator | 2026-01-10 14:41:27.238115 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-01-10 14:41:27.238124 | orchestrator | Saturday 10 January 2026 14:37:06 +0000 (0:00:01.445) 0:07:26.486 ****** 2026-01-10 14:41:27.238128 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:41:27.238132 | orchestrator | 2026-01-10 14:41:27.238136 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-01-10 14:41:27.238139 | orchestrator | Saturday 10 January 2026 14:37:08 +0000 (0:00:02.178) 0:07:28.665 ****** 2026-01-10 14:41:27.238143 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:41:27.238147 | orchestrator | 2026-01-10 14:41:27.238151 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-01-10 14:41:27.238154 | orchestrator | Saturday 10 January 2026 14:37:09 +0000 (0:00:00.574) 0:07:29.239 ****** 2026-01-10 14:41:27.238158 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6bac10f4-8703-5b93-90a3-91ba865f27b3', 'data_vg': 'ceph-6bac10f4-8703-5b93-90a3-91ba865f27b3'}) 2026-01-10 14:41:27.238163 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4cb3fc90-004d-5443-9ae7-f5eff9c4438f', 'data_vg': 'ceph-4cb3fc90-004d-5443-9ae7-f5eff9c4438f'}) 2026-01-10 14:41:27.238170 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0fad3856-f6d1-50e2-a5cb-d9f4a0859299', 'data_vg': 'ceph-0fad3856-f6d1-50e2-a5cb-d9f4a0859299'}) 2026-01-10 14:41:27.238174 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ef830303-d908-5775-964e-bef8687288a6', 'data_vg': 'ceph-ef830303-d908-5775-964e-bef8687288a6'}) 2026-01-10 14:41:27.238177 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dec76364-a7ee-5469-8bc3-2dcf5060f83e', 'data_vg': 'ceph-dec76364-a7ee-5469-8bc3-2dcf5060f83e'}) 2026-01-10 14:41:27.238184 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-39355231-3192-5ff7-9e27-947e8968f1e9', 'data_vg': 'ceph-39355231-3192-5ff7-9e27-947e8968f1e9'}) 2026-01-10 14:41:27.238188 | orchestrator | 2026-01-10 14:41:27.238192 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-01-10 14:41:27.238195 | orchestrator | Saturday 10 January 2026 14:37:54 +0000 (0:00:45.032) 0:08:14.272 ****** 2026-01-10 14:41:27.238199 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.238203 | orchestrator | skipping: [testbed-node-4] 2026-01-10 
14:41:27.238206 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.238212 | orchestrator | 2026-01-10 14:41:27.238218 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-01-10 14:41:27.238223 | orchestrator | Saturday 10 January 2026 14:37:54 +0000 (0:00:00.330) 0:08:14.603 ****** 2026-01-10 14:41:27.238229 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:41:27.238236 | orchestrator | 2026-01-10 14:41:27.238242 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-01-10 14:41:27.238247 | orchestrator | Saturday 10 January 2026 14:37:55 +0000 (0:00:00.531) 0:08:15.134 ****** 2026-01-10 14:41:27.238253 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.238259 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.238264 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.238270 | orchestrator | 2026-01-10 14:41:27.238276 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-01-10 14:41:27.238282 | orchestrator | Saturday 10 January 2026 14:37:56 +0000 (0:00:01.047) 0:08:16.181 ****** 2026-01-10 14:41:27.238293 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.238299 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.238305 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.238311 | orchestrator | 2026-01-10 14:41:27.238317 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-01-10 14:41:27.238323 | orchestrator | Saturday 10 January 2026 14:37:59 +0000 (0:00:02.659) 0:08:18.841 ****** 2026-01-10 14:41:27.238329 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:41:27.238335 | orchestrator | 2026-01-10 14:41:27.238339 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-01-10 14:41:27.238343 | orchestrator | Saturday 10 January 2026 14:37:59 +0000 (0:00:00.554) 0:08:19.395 ****** 2026-01-10 14:41:27.238347 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:41:27.238350 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:41:27.238354 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:41:27.238358 | orchestrator | 2026-01-10 14:41:27.238361 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-01-10 14:41:27.238365 | orchestrator | Saturday 10 January 2026 14:38:01 +0000 (0:00:01.530) 0:08:20.926 ****** 2026-01-10 14:41:27.238368 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:41:27.238372 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:41:27.238376 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:41:27.238379 | orchestrator | 2026-01-10 14:41:27.238383 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-01-10 14:41:27.238387 | orchestrator | Saturday 10 January 2026 14:38:02 +0000 (0:00:01.221) 0:08:22.148 ****** 2026-01-10 14:41:27.238390 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:41:27.238394 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:41:27.238398 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:41:27.238401 | orchestrator | 2026-01-10 14:41:27.238405 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-01-10 14:41:27.238409 | orchestrator | Saturday 10 January 2026 14:38:04 +0000 (0:00:01.854) 0:08:24.002 ****** 2026-01-10 14:41:27.238412 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.238416 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.238419 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.238423 | orchestrator | 2026-01-10 14:41:27.238427 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-01-10 14:41:27.238430 | orchestrator | Saturday 10 January 2026 14:38:04 +0000 (0:00:00.325) 0:08:24.328 ****** 2026-01-10 14:41:27.238434 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.238438 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.238441 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.238445 | orchestrator | 2026-01-10 14:41:27.238449 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-01-10 14:41:27.238452 | orchestrator | Saturday 10 January 2026 14:38:05 +0000 (0:00:00.615) 0:08:24.943 ****** 2026-01-10 14:41:27.238456 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-01-10 14:41:27.238460 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-01-10 14:41:27.238463 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-01-10 14:41:27.238467 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-10 14:41:27.238471 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-01-10 14:41:27.238474 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-01-10 14:41:27.238478 | orchestrator | 2026-01-10 14:41:27.238482 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-01-10 14:41:27.238486 | orchestrator | Saturday 10 January 2026 14:38:06 +0000 (0:00:01.123) 0:08:26.066 ****** 2026-01-10 14:41:27.238489 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-01-10 14:41:27.238493 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-01-10 14:41:27.238518 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-01-10 14:41:27.238525 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-01-10 14:41:27.238536 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-01-10 14:41:27.238542 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-01-10 14:41:27.238548 | orchestrator | 2026-01-10 14:41:27.238554 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-01-10 14:41:27.238560 | orchestrator | Saturday 10 January 2026 14:38:08 +0000 (0:00:02.251) 0:08:28.318 ****** 2026-01-10 14:41:27.238566 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-01-10 14:41:27.238572 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-01-10 14:41:27.238578 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-01-10 14:41:27.238588 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-01-10 14:41:27.238594 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-01-10 14:41:27.238600 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-01-10 14:41:27.238606 | orchestrator | 2026-01-10 14:41:27.238612 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-01-10 14:41:27.238616 | orchestrator | Saturday 10 January 2026 14:38:12 +0000 (0:00:03.967) 0:08:32.285 ****** 2026-01-10 14:41:27.238620 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.238624 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.238627 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:41:27.238631 | orchestrator | 2026-01-10 14:41:27.238635 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-01-10 14:41:27.238638 | orchestrator | Saturday 10 January 2026 14:38:15 +0000 (0:00:03.314) 0:08:35.599 ****** 2026-01-10 14:41:27.238642 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.238646 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.238649 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-01-10 14:41:27.238653 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:41:27.238657 | orchestrator | 2026-01-10 14:41:27.238660 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-01-10 14:41:27.238664 | orchestrator | Saturday 10 January 2026 14:38:28 +0000 (0:00:12.533) 0:08:48.133 ****** 2026-01-10 14:41:27.238668 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.238671 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.238675 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.238679 | orchestrator | 2026-01-10 14:41:27.238683 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-10 14:41:27.238686 | orchestrator | Saturday 10 January 2026 14:38:29 +0000 (0:00:01.136) 0:08:49.269 ****** 2026-01-10 14:41:27.238690 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.238694 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.238697 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.238701 | orchestrator | 2026-01-10 14:41:27.238705 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-10 14:41:27.238708 | orchestrator | Saturday 10 January 2026 14:38:29 +0000 (0:00:00.388) 0:08:49.658 ****** 2026-01-10 14:41:27.238712 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:41:27.238716 | orchestrator | 2026-01-10 14:41:27.238720 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-10 14:41:27.238723 | orchestrator | Saturday 10 January 2026 14:38:30 +0000 (0:00:00.538) 0:08:50.196 ****** 2026-01-10 14:41:27.238727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:41:27.238731 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-01-10 14:41:27.238734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:41:27.238738 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.238742 | orchestrator | 2026-01-10 14:41:27.238746 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-10 14:41:27.238749 | orchestrator | Saturday 10 January 2026 14:38:31 +0000 (0:00:00.730) 0:08:50.927 ****** 2026-01-10 14:41:27.238761 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.238764 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.238768 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.238772 | orchestrator | 2026-01-10 14:41:27.238775 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-10 14:41:27.238779 | orchestrator | Saturday 10 January 2026 14:38:31 +0000 (0:00:00.630) 0:08:51.558 ****** 2026-01-10 14:41:27.238783 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.238786 | orchestrator | 2026-01-10 14:41:27.238790 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-10 14:41:27.238794 | orchestrator | Saturday 10 January 2026 14:38:31 +0000 (0:00:00.228) 0:08:51.787 ****** 2026-01-10 14:41:27.238797 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.238801 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.238805 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.238808 | orchestrator | 2026-01-10 14:41:27.238812 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-10 14:41:27.238816 | orchestrator | Saturday 10 January 2026 14:38:32 +0000 (0:00:00.349) 0:08:52.136 ****** 2026-01-10 14:41:27.238819 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.238823 | orchestrator | 2026-01-10 14:41:27.238827 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-10 14:41:27.238830 | orchestrator | Saturday 10 January 2026 14:38:32 +0000 (0:00:00.224) 0:08:52.361 ****** 2026-01-10 14:41:27.238834 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.238838 | orchestrator | 2026-01-10 14:41:27.238842 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-10 14:41:27.238845 | orchestrator | Saturday 10 January 2026 14:38:32 +0000 (0:00:00.275) 0:08:52.637 ****** 2026-01-10 14:41:27.238849 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.238853 | orchestrator | 2026-01-10 14:41:27.238857 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-10 14:41:27.238863 | orchestrator | Saturday 10 January 2026 14:38:32 +0000 (0:00:00.148) 0:08:52.785 ****** 2026-01-10 14:41:27.238874 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.238880 | orchestrator | 2026-01-10 14:41:27.238886 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-10 14:41:27.238893 | orchestrator | Saturday 10 January 2026 14:38:33 +0000 (0:00:00.239) 0:08:53.025 ****** 2026-01-10 14:41:27.238899 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.238905 | orchestrator | 2026-01-10 14:41:27.238911 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-10 14:41:27.238918 | orchestrator | Saturday 10 January 2026 14:38:33 +0000 (0:00:00.224) 0:08:53.250 ****** 2026-01-10 14:41:27.238923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:41:27.238930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:41:27.238934 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:41:27.238938 | orchestrator | skipping: [testbed-node-3] 2026-01-10 
14:41:27.238942 | orchestrator | 2026-01-10 14:41:27.238945 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-10 14:41:27.238949 | orchestrator | Saturday 10 January 2026 14:38:34 +0000 (0:00:01.046) 0:08:54.296 ****** 2026-01-10 14:41:27.238953 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.238956 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.238960 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.238964 | orchestrator | 2026-01-10 14:41:27.238967 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-10 14:41:27.238971 | orchestrator | Saturday 10 January 2026 14:38:34 +0000 (0:00:00.339) 0:08:54.636 ****** 2026-01-10 14:41:27.238975 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.238978 | orchestrator | 2026-01-10 14:41:27.238982 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-10 14:41:27.238990 | orchestrator | Saturday 10 January 2026 14:38:35 +0000 (0:00:00.244) 0:08:54.880 ****** 2026-01-10 14:41:27.238995 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.239000 | orchestrator | 2026-01-10 14:41:27.239006 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-01-10 14:41:27.239012 | orchestrator | 2026-01-10 14:41:27.239018 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-10 14:41:27.239025 | orchestrator | Saturday 10 January 2026 14:38:35 +0000 (0:00:00.692) 0:08:55.573 ****** 2026-01-10 14:41:27.239031 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:41:27.239038 | orchestrator | 2026-01-10 14:41:27.239043 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-01-10 14:41:27.239049 | orchestrator | Saturday 10 January 2026 14:38:37 +0000 (0:00:01.311) 0:08:56.885 ****** 2026-01-10 14:41:27.239054 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:41:27.239060 | orchestrator | 2026-01-10 14:41:27.239066 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-10 14:41:27.239071 | orchestrator | Saturday 10 January 2026 14:38:38 +0000 (0:00:01.354) 0:08:58.239 ****** 2026-01-10 14:41:27.239077 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.239082 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.239087 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.239092 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.239098 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.239103 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.239111 | orchestrator | 2026-01-10 14:41:27.239117 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-10 14:41:27.239123 | orchestrator | Saturday 10 January 2026 14:38:39 +0000 (0:00:01.276) 0:08:59.516 ****** 2026-01-10 14:41:27.239129 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.239134 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.239140 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.239147 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.239154 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.239160 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.239166 | orchestrator | 2026-01-10 14:41:27.239172 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-10 14:41:27.239178 | orchestrator | Saturday 10 
January 2026 14:38:40 +0000 (0:00:00.778) 0:09:00.295 ****** 2026-01-10 14:41:27.239181 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.239185 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.239189 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.239192 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.239196 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.239200 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.239203 | orchestrator | 2026-01-10 14:41:27.239207 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-10 14:41:27.239211 | orchestrator | Saturday 10 January 2026 14:38:41 +0000 (0:00:01.011) 0:09:01.306 ****** 2026-01-10 14:41:27.239214 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.239218 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.239222 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.239225 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.239229 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.239233 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.239237 | orchestrator | 2026-01-10 14:41:27.239241 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-10 14:41:27.239244 | orchestrator | Saturday 10 January 2026 14:38:42 +0000 (0:00:00.697) 0:09:02.004 ****** 2026-01-10 14:41:27.239248 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.239256 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.239260 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.239264 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.239267 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.239271 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.239275 | orchestrator | 2026-01-10 14:41:27.239278 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-01-10 14:41:27.239286 | orchestrator | Saturday 10 January 2026 14:38:43 +0000 (0:00:01.349) 0:09:03.353 ****** 2026-01-10 14:41:27.239290 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.239293 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.239297 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.239301 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.239305 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.239308 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.239312 | orchestrator | 2026-01-10 14:41:27.239316 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-10 14:41:27.239319 | orchestrator | Saturday 10 January 2026 14:38:44 +0000 (0:00:00.654) 0:09:04.007 ****** 2026-01-10 14:41:27.239323 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.239327 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.239333 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.239337 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:41:27.239341 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:41:27.239344 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:41:27.239348 | orchestrator | 2026-01-10 14:41:27.239352 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-10 14:41:27.239355 | orchestrator | Saturday 10 January 2026 14:38:45 +0000 (0:00:00.876) 0:09:04.884 ****** 2026-01-10 14:41:27.239359 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.239363 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.239366 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.239370 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:41:27.239374 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:41:27.239377 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:41:27.239381 | orchestrator 
2026-01-10 14:41:27.239385 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-10 14:41:27.239389 | orchestrator | Saturday 10 January 2026 14:38:46 +0000 (0:00:01.051) 0:09:05.935 ******
2026-01-10 14:41:27.239392 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.239396 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.239399 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.239403 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.239406 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.239410 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.239414 | orchestrator |
2026-01-10 14:41:27.239417 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-10 14:41:27.239421 | orchestrator | Saturday 10 January 2026 14:38:47 +0000 (0:00:01.406) 0:09:07.342 ******
2026-01-10 14:41:27.239425 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.239429 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.239432 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.239436 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.239440 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.239443 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.239447 | orchestrator |
2026-01-10 14:41:27.239451 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-10 14:41:27.239454 | orchestrator | Saturday 10 January 2026 14:38:48 +0000 (0:00:00.605) 0:09:07.947 ******
2026-01-10 14:41:27.239458 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.239462 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.239465 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.239469 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.239473 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.239481 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.239484 | orchestrator |
2026-01-10 14:41:27.239488 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-10 14:41:27.239492 | orchestrator | Saturday 10 January 2026 14:38:49 +0000 (0:00:00.904) 0:09:08.851 ******
2026-01-10 14:41:27.239496 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.239539 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.239544 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.239548 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.239551 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.239555 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.239559 | orchestrator |
2026-01-10 14:41:27.239562 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-10 14:41:27.239566 | orchestrator | Saturday 10 January 2026 14:38:49 +0000 (0:00:00.631) 0:09:09.483 ******
2026-01-10 14:41:27.239570 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.239574 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.239577 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.239581 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.239585 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.239588 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.239592 | orchestrator |
2026-01-10 14:41:27.239596 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-10 14:41:27.239600 | orchestrator | Saturday 10 January 2026 14:38:50 +0000 (0:00:00.929) 0:09:10.412 ******
2026-01-10 14:41:27.239603 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.239607 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.239611 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.239614 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.239618 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.239622 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.239626 | orchestrator |
2026-01-10 14:41:27.239629 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-10 14:41:27.239633 | orchestrator | Saturday 10 January 2026 14:38:51 +0000 (0:00:00.633) 0:09:11.046 ******
2026-01-10 14:41:27.239637 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.239640 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.239644 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.239648 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.239652 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.239655 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.239659 | orchestrator |
2026-01-10 14:41:27.239663 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-10 14:41:27.239667 | orchestrator | Saturday 10 January 2026 14:38:52 +0000 (0:00:00.845) 0:09:11.891 ******
2026-01-10 14:41:27.239670 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.239674 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.239678 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.239681 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:41:27.239685 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:41:27.239689 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:41:27.239693 | orchestrator |
2026-01-10 14:41:27.239696 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-10 14:41:27.239704 | orchestrator | Saturday 10 January 2026 14:38:52 +0000 (0:00:00.606) 0:09:12.498 ******
2026-01-10 14:41:27.239708 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.239711 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.239715 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.239719 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.239722 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.239726 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.239730 | orchestrator |
2026-01-10 14:41:27.239734 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-10 14:41:27.239737 | orchestrator | Saturday 10 January 2026 14:38:53 +0000 (0:00:00.909) 0:09:13.408 ******
2026-01-10 14:41:27.239745 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.239752 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.239756 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.239760 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.239764 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.239767 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.239771 | orchestrator |
2026-01-10 14:41:27.239775 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-10 14:41:27.239778 | orchestrator | Saturday 10 January 2026 14:38:54 +0000 (0:00:00.674) 0:09:14.083 ******
2026-01-10 14:41:27.239782 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.239786 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.239789 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.239793 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.239798 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.239804 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.239810 | orchestrator |
2026-01-10 14:41:27.239816 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-01-10 14:41:27.239830 | orchestrator | Saturday 10 January 2026 14:38:55 +0000 (0:00:01.290) 0:09:15.373 ******
2026-01-10 14:41:27.239837 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-10 14:41:27.239843 | orchestrator |
2026-01-10 14:41:27.239850 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-01-10 14:41:27.239855 | orchestrator | Saturday 10 January 2026 14:38:59 +0000 (0:00:04.219) 0:09:19.593 ******
2026-01-10 14:41:27.239861 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-10 14:41:27.239868 | orchestrator |
2026-01-10 14:41:27.239875 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-01-10 14:41:27.239881 | orchestrator | Saturday 10 January 2026 14:39:02 +0000 (0:00:02.749) 0:09:22.342 ******
2026-01-10 14:41:27.239887 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:41:27.239894 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:41:27.239900 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:41:27.239907 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.239914 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:41:27.239921 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:41:27.239927 | orchestrator |
2026-01-10 14:41:27.239934 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-01-10 14:41:27.239940 | orchestrator | Saturday 10 January 2026 14:39:04 +0000 (0:00:02.153) 0:09:24.496 ******
2026-01-10 14:41:27.239947 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:41:27.239951 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:41:27.239954 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:41:27.239958 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:41:27.239962 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:41:27.239965 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:41:27.239969 | orchestrator |
2026-01-10 14:41:27.239973 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-01-10 14:41:27.239977 | orchestrator | Saturday 10 January 2026 14:39:05 +0000 (0:00:01.017) 0:09:25.513 ******
2026-01-10 14:41:27.239981 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:41:27.239987 | orchestrator |
2026-01-10 14:41:27.239991 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-01-10 14:41:27.239994 | orchestrator | Saturday 10 January 2026 14:39:07 +0000 (0:00:01.332) 0:09:26.846 ******
2026-01-10 14:41:27.239998 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:41:27.240001 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:41:27.240005 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:41:27.240009 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:41:27.240012 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:41:27.240016 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:41:27.240027 | orchestrator |
2026-01-10 14:41:27.240031 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-01-10 14:41:27.240035 | orchestrator | Saturday 10 January 2026 14:39:09 +0000 (0:00:02.003) 0:09:28.849 ******
2026-01-10 14:41:27.240038 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:41:27.240042 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:41:27.240046 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:41:27.240049 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:41:27.240053 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:41:27.240057 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:41:27.240061 | orchestrator |
2026-01-10 14:41:27.240064 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-01-10 14:41:27.240068 | orchestrator | Saturday 10 January 2026 14:39:12 +0000 (0:00:03.417) 0:09:32.267 ******
2026-01-10 14:41:27.240072 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:41:27.240076 | orchestrator |
2026-01-10 14:41:27.240080 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-01-10 14:41:27.240083 | orchestrator | Saturday 10 January 2026 14:39:13 +0000 (0:00:01.394) 0:09:33.661 ******
2026-01-10 14:41:27.240087 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.240091 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.240094 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.240098 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.240102 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.240105 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.240109 | orchestrator |
2026-01-10 14:41:27.240113 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-01-10 14:41:27.240121 | orchestrator | Saturday 10 January 2026 14:39:14 +0000 (0:00:00.887) 0:09:34.548 ******
2026-01-10 14:41:27.240124 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:41:27.240128 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:41:27.240132 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:41:27.240136 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:41:27.240139 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:41:27.240143 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:41:27.240147 | orchestrator |
2026-01-10 14:41:27.240150 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-01-10 14:41:27.240154 | orchestrator | Saturday 10 January 2026 14:39:17 +0000 (0:00:02.485) 0:09:37.034 ******
2026-01-10 14:41:27.240160 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.240164 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.240168 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.240172 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:41:27.240175 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:41:27.240179 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:41:27.240183 | orchestrator |
2026-01-10 14:41:27.240186 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-01-10 14:41:27.240190 | orchestrator |
2026-01-10 14:41:27.240194 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-10 14:41:27.240197 | orchestrator | Saturday 10 January 2026 14:39:18 +0000 (0:00:01.133) 0:09:38.168 ******
2026-01-10 14:41:27.240201 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:41:27.240205 | orchestrator |
2026-01-10 14:41:27.240209 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-10 14:41:27.240212 | orchestrator | Saturday 10 January 2026 14:39:18 +0000 (0:00:00.514) 0:09:38.682 ******
2026-01-10 14:41:27.240216 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:41:27.240220 | orchestrator |
2026-01-10 14:41:27.240224 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-10 14:41:27.240230 | orchestrator | Saturday 10 January 2026 14:39:19 +0000 (0:00:00.759) 0:09:39.442 ******
2026-01-10 14:41:27.240234 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.240238 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.240242 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.240245 | orchestrator |
2026-01-10 14:41:27.240249 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-10 14:41:27.240253 | orchestrator | Saturday 10 January 2026 14:39:19 +0000 (0:00:00.344) 0:09:39.786 ******
2026-01-10 14:41:27.240256 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.240260 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.240264 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.240267 | orchestrator |
2026-01-10 14:41:27.240271 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-10 14:41:27.240275 | orchestrator | Saturday 10 January 2026 14:39:20 +0000 (0:00:00.723) 0:09:40.510 ******
2026-01-10 14:41:27.240278 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.240282 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.240286 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.240289 | orchestrator |
2026-01-10 14:41:27.240293 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-10 14:41:27.240297 | orchestrator | Saturday 10 January 2026 14:39:21 +0000 (0:00:01.075) 0:09:41.586 ******
2026-01-10 14:41:27.240301 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.240304 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.240308 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.240312 | orchestrator |
2026-01-10 14:41:27.240315 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-10 14:41:27.240319 | orchestrator | Saturday 10 January 2026 14:39:22 +0000 (0:00:00.734) 0:09:42.320 ******
2026-01-10 14:41:27.240323 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.240326 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.240330 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.240334 | orchestrator |
2026-01-10 14:41:27.240337 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-10 14:41:27.240341 | orchestrator | Saturday 10 January 2026 14:39:22 +0000 (0:00:00.333) 0:09:42.654 ******
2026-01-10 14:41:27.240345 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.240348 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.240352 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.240356 | orchestrator |
2026-01-10 14:41:27.240359 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-10 14:41:27.240363 | orchestrator | Saturday 10 January 2026 14:39:23 +0000 (0:00:00.332) 0:09:42.987 ******
2026-01-10 14:41:27.240367 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.240370 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.240374 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.240378 | orchestrator |
2026-01-10 14:41:27.240381 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-10 14:41:27.240385 | orchestrator | Saturday 10 January 2026 14:39:23 +0000 (0:00:00.624) 0:09:43.611 ******
2026-01-10 14:41:27.240389 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.240392 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.240396 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.240400 | orchestrator |
2026-01-10 14:41:27.240403 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-10 14:41:27.240407 | orchestrator | Saturday 10 January 2026 14:39:24 +0000 (0:00:00.777) 0:09:44.388 ******
2026-01-10 14:41:27.240411 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.240414 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.240418 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.240422 | orchestrator |
2026-01-10 14:41:27.240425 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-10 14:41:27.240429 | orchestrator | Saturday 10 January 2026 14:39:25 +0000 (0:00:00.776) 0:09:45.164 ******
2026-01-10 14:41:27.240433 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.240440 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.240444 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.240447 | orchestrator |
2026-01-10 14:41:27.240451 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-10 14:41:27.240457 | orchestrator | Saturday 10 January 2026 14:39:25 +0000 (0:00:00.331) 0:09:45.496 ******
2026-01-10 14:41:27.240461 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.240465 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.240468 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.240472 | orchestrator |
2026-01-10 14:41:27.240476 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-10 14:41:27.240480 | orchestrator | Saturday 10 January 2026 14:39:26 +0000 (0:00:00.596) 0:09:46.093 ******
2026-01-10 14:41:27.240483 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.240487 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.240491 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.240494 | orchestrator |
2026-01-10 14:41:27.240523 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-10 14:41:27.240528 | orchestrator | Saturday 10 January 2026 14:39:26 +0000 (0:00:00.364) 0:09:46.458 ******
2026-01-10 14:41:27.240532 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.240535 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.240539 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.240543 | orchestrator |
2026-01-10 14:41:27.240546 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-10 14:41:27.240550 | orchestrator | Saturday 10 January 2026 14:39:27 +0000 (0:00:00.388) 0:09:46.846 ******
2026-01-10 14:41:27.240554 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.240558 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.240561 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.240565 | orchestrator |
2026-01-10 14:41:27.240568 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-10 14:41:27.240572 | orchestrator | Saturday 10 January 2026 14:39:27 +0000 (0:00:00.336) 0:09:47.182 ******
2026-01-10 14:41:27.240576 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.240580 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.240583 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.240587 | orchestrator |
2026-01-10 14:41:27.240590 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-10 14:41:27.240594 | orchestrator | Saturday 10 January 2026 14:39:27 +0000 (0:00:00.601) 0:09:47.783 ******
2026-01-10 14:41:27.240598 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.240602 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.240605 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.240609 | orchestrator |
2026-01-10 14:41:27.240613 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-10 14:41:27.240616 | orchestrator | Saturday 10 January 2026 14:39:28 +0000 (0:00:00.327) 0:09:48.110 ******
2026-01-10 14:41:27.240620 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.240624 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.240627 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.240631 | orchestrator |
2026-01-10 14:41:27.240635 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-10 14:41:27.240638 | orchestrator | Saturday 10 January 2026 14:39:28 +0000 (0:00:00.341) 0:09:48.451 ******
2026-01-10 14:41:27.240642 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.240646 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.240649 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.240653 | orchestrator |
2026-01-10 14:41:27.240657 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-10 14:41:27.240660 | orchestrator | Saturday 10 January 2026 14:39:28 +0000 (0:00:00.338) 0:09:48.790 ******
2026-01-10 14:41:27.240664 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.240668 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.240682 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.240686 | orchestrator |
2026-01-10 14:41:27.240689 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-01-10 14:41:27.240693 | orchestrator | Saturday 10 January 2026 14:39:29 +0000 (0:00:00.846) 0:09:49.636 ******
2026-01-10 14:41:27.240697 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.240700 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.240704 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-01-10 14:41:27.240708 | orchestrator |
2026-01-10 14:41:27.240712 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-01-10 14:41:27.240716 | orchestrator | Saturday 10 January 2026 14:39:30 +0000 (0:00:00.566) 0:09:50.203 ******
2026-01-10 14:41:27.240719 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-10 14:41:27.240723 | orchestrator |
2026-01-10 14:41:27.240727 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-01-10 14:41:27.240730 | orchestrator | Saturday 10 January 2026 14:39:32 +0000 (0:00:02.350) 0:09:52.553 ******
2026-01-10 14:41:27.240736 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-01-10 14:41:27.240742 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.240745 | orchestrator |
2026-01-10 14:41:27.240749 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-01-10 14:41:27.240753 | orchestrator | Saturday 10 January 2026 14:39:32 +0000 (0:00:00.209) 0:09:52.763 ******
2026-01-10 14:41:27.240758 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-10 14:41:27.240764 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-10 14:41:27.240768 | orchestrator |
2026-01-10 14:41:27.240774 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-01-10 14:41:27.240778 | orchestrator | Saturday 10 January 2026 14:39:41 +0000 (0:00:08.653) 0:10:01.417 ******
2026-01-10 14:41:27.240782 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-10 14:41:27.240786 | orchestrator |
2026-01-10 14:41:27.240789 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-01-10 14:41:27.240793 | orchestrator | Saturday 10 January 2026 14:39:45 +0000 (0:00:04.057) 0:10:05.474 ******
2026-01-10 14:41:27.240797 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:41:27.240801 | orchestrator |
2026-01-10 14:41:27.240807 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-01-10 14:41:27.240811 | orchestrator | Saturday 10 January 2026 14:39:46 +0000 (0:00:00.640) 0:10:06.115 ******
2026-01-10 14:41:27.240814 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-10 14:41:27.240818 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-10 14:41:27.240822 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-10 14:41:27.240826 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-01-10 14:41:27.240829 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-01-10 14:41:27.240833 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-01-10 14:41:27.240837 | orchestrator |
2026-01-10 14:41:27.240840 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-01-10 14:41:27.240849 | orchestrator | Saturday 10 January 2026 14:39:47 +0000 (0:00:01.178) 0:10:07.294 ******
2026-01-10 14:41:27.240853 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-10 14:41:27.240857 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-10 14:41:27.240861 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-10 14:41:27.240865 | orchestrator |
2026-01-10 14:41:27.240868 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-01-10 14:41:27.240872 | orchestrator | Saturday 10 January 2026 14:39:49 +0000 (0:00:02.498) 0:10:09.793 ******
2026-01-10 14:41:27.240876 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-10 14:41:27.240880 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-10 14:41:27.240883 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:41:27.240887 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-10 14:41:27.240891 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-10 14:41:27.240898 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-10 14:41:27.240904 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:41:27.240910 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-10 14:41:27.240915 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:41:27.240921 | orchestrator |
2026-01-10 14:41:27.240926 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-01-10 14:41:27.240936 | orchestrator | Saturday 10 January 2026 14:39:51 +0000 (0:00:01.597) 0:10:11.391 ******
2026-01-10 14:41:27.240943 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:41:27.240949 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:41:27.240954 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:41:27.240959 | orchestrator |
2026-01-10 14:41:27.240965 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-01-10 14:41:27.240971 | orchestrator | Saturday 10 January 2026 14:39:54 +0000 (0:00:02.764) 0:10:14.156 ******
2026-01-10 14:41:27.240976 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.240982 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.240988 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.240994 | orchestrator |
2026-01-10 14:41:27.241000 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-01-10 14:41:27.241006 | orchestrator | Saturday 10 January 2026 14:39:54 +0000 (0:00:00.420) 0:10:14.576 ******
2026-01-10 14:41:27.241012 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:41:27.241018 | orchestrator |
2026-01-10 14:41:27.241024 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-01-10 14:41:27.241030 | orchestrator | Saturday 10 January 2026 14:39:55 +0000 (0:00:00.824) 0:10:15.401 ******
2026-01-10 14:41:27.241036 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:41:27.241043 | orchestrator |
2026-01-10 14:41:27.241047 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-01-10 14:41:27.241050 | orchestrator | Saturday 10 January 2026 14:39:56 +0000 (0:00:00.550) 0:10:15.951 ******
2026-01-10 14:41:27.241054 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:41:27.241058 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:41:27.241062 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:41:27.241065 | orchestrator |
2026-01-10 14:41:27.241069 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-01-10 14:41:27.241073 | orchestrator | Saturday 10 January 2026 14:39:57 +0000 (0:00:01.350) 0:10:17.302 ******
2026-01-10 14:41:27.241076 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:41:27.241080 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:41:27.241084 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:41:27.241087 | orchestrator |
2026-01-10 14:41:27.241091 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-01-10 14:41:27.241099 | orchestrator | Saturday 10 January 2026 14:39:59 +0000 (0:00:01.620) 0:10:18.923 ******
2026-01-10 14:41:27.241103 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:41:27.241107 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:41:27.241110 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:41:27.241114 | orchestrator |
2026-01-10 14:41:27.241118 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-01-10 14:41:27.241125 | orchestrator | Saturday 10 January 2026 14:40:01 +0000 (0:00:01.976) 0:10:20.900 ******
2026-01-10 14:41:27.241129 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:41:27.241132 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:41:27.241136 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:41:27.241140 | orchestrator |
2026-01-10 14:41:27.241143 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-01-10 14:41:27.241148 | orchestrator | Saturday 10 January 2026 14:40:03 +0000 (0:00:02.563) 0:10:23.463 ******
2026-01-10 14:41:27.241155 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.241159 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.241163 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.241166 | orchestrator |
2026-01-10 14:41:27.241173 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-10 14:41:27.241177 | orchestrator | Saturday 10 January 2026 14:40:05 +0000 (0:00:02.081) 0:10:25.545 ******
2026-01-10 14:41:27.241181 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:41:27.241185 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:41:27.241188 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:41:27.241192 | orchestrator |
2026-01-10 14:41:27.241196 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-01-10 14:41:27.241199 | orchestrator | Saturday 10 January 2026 14:40:06 +0000 (0:00:01.216) 0:10:26.762 ******
2026-01-10 14:41:27.241203 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-4, testbed-node-3, testbed-node-5
2026-01-10 14:41:27.241207 | orchestrator |
2026-01-10 14:41:27.241210 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-01-10 14:41:27.241214 | orchestrator | Saturday 10 January 2026 14:40:08 +0000 (0:00:01.516) 0:10:28.278 ******
2026-01-10 14:41:27.241218 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.241221 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.241225 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.241229 | orchestrator |
2026-01-10 14:41:27.241232 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-01-10 14:41:27.241236 | orchestrator | Saturday 10 January 2026 14:40:08 +0000 (0:00:00.495) 0:10:28.773 ******
2026-01-10 14:41:27.241240 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:41:27.241243 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:41:27.241247 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:41:27.241251 | orchestrator |
2026-01-10 14:41:27.241254 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-01-10 14:41:27.241258 | orchestrator | Saturday 10 January 2026 14:40:10 +0000 (0:00:01.405) 0:10:30.179 ******
2026-01-10 14:41:27.241262 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-10 14:41:27.241265 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-10 14:41:27.241269 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-10 14:41:27.241273 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.241276 | orchestrator |
2026-01-10 14:41:27.241280 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-01-10 14:41:27.241284 | orchestrator | Saturday 10 January 2026 14:40:11 +0000 (0:00:01.006) 0:10:31.185 ******
2026-01-10 14:41:27.241287 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.241291 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.241295 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.241298 | orchestrator |
2026-01-10 14:41:27.241302 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-01-10 14:41:27.241309 | orchestrator |
2026-01-10 14:41:27.241313 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-10 14:41:27.241317 | orchestrator | Saturday 10 January 2026 14:40:12 +0000 (0:00:01.177) 0:10:32.362 ******
2026-01-10 14:41:27.241320 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:41:27.241324 | orchestrator |
2026-01-10 14:41:27.241328 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-10 14:41:27.241332 | orchestrator | Saturday 10 January 2026 14:40:13 +0000 (0:00:00.729) 0:10:33.092 ******
2026-01-10 14:41:27.241335 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:41:27.241339 | orchestrator |
2026-01-10 14:41:27.241343 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-10 14:41:27.241346 | orchestrator | Saturday 10 January 2026 14:40:14 +0000 (0:00:00.989) 0:10:34.081 ******
2026-01-10 14:41:27.241350 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.241354 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.241357 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.241361 | orchestrator |
2026-01-10 14:41:27.241365 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-10 14:41:27.241369 | orchestrator | Saturday 10 January 2026 14:40:14 +0000 (0:00:00.389) 0:10:34.471 ******
2026-01-10 14:41:27.241372 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.241376 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.241380 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.241383 | orchestrator |
2026-01-10 14:41:27.241387 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-10 14:41:27.241390 | orchestrator | Saturday 10 January 2026 14:40:15 +0000 (0:00:00.628) 0:10:35.099 ******
2026-01-10 14:41:27.241394 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.241398 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.241401 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.241405 | orchestrator |
2026-01-10 14:41:27.241409 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-10 14:41:27.241412 | orchestrator | Saturday 10 January 2026 14:40:15 +0000 (0:00:00.666) 0:10:35.766 ******
2026-01-10 14:41:27.241416 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:41:27.241420 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:41:27.241424 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:41:27.241427 | orchestrator |
2026-01-10 14:41:27.241431 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-10 14:41:27.241435 | orchestrator | Saturday 10 January 2026 14:40:16 +0000 (0:00:01.060) 0:10:36.826 ******
2026-01-10 14:41:27.241438 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.241444 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.241448 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:41:27.241452 | orchestrator |
2026-01-10 14:41:27.241455 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-10 14:41:27.241459 | orchestrator | Saturday 10 January 2026 14:40:17 +0000 (0:00:00.328) 0:10:37.155 ******
2026-01-10 14:41:27.241463 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:41:27.241466 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:41:27.241470 | orchestrator | skipping:
[testbed-node-5] 2026-01-10 14:41:27.241474 | orchestrator | 2026-01-10 14:41:27.241478 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-10 14:41:27.241488 | orchestrator | Saturday 10 January 2026 14:40:17 +0000 (0:00:00.274) 0:10:37.429 ****** 2026-01-10 14:41:27.241494 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.241516 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.241522 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.241528 | orchestrator | 2026-01-10 14:41:27.241533 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-10 14:41:27.241539 | orchestrator | Saturday 10 January 2026 14:40:17 +0000 (0:00:00.260) 0:10:37.690 ****** 2026-01-10 14:41:27.241550 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.241556 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.241563 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.241568 | orchestrator | 2026-01-10 14:41:27.241574 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-10 14:41:27.241580 | orchestrator | Saturday 10 January 2026 14:40:18 +0000 (0:00:00.823) 0:10:38.513 ****** 2026-01-10 14:41:27.241586 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.241591 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.241597 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.241603 | orchestrator | 2026-01-10 14:41:27.241609 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-10 14:41:27.241615 | orchestrator | Saturday 10 January 2026 14:40:19 +0000 (0:00:00.674) 0:10:39.188 ****** 2026-01-10 14:41:27.241621 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.241627 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.241631 | orchestrator | skipping: [testbed-node-5] 2026-01-10 
14:41:27.241635 | orchestrator | 2026-01-10 14:41:27.241639 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-10 14:41:27.241642 | orchestrator | Saturday 10 January 2026 14:40:19 +0000 (0:00:00.305) 0:10:39.493 ****** 2026-01-10 14:41:27.241646 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.241650 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.241653 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.241657 | orchestrator | 2026-01-10 14:41:27.241661 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-10 14:41:27.241664 | orchestrator | Saturday 10 January 2026 14:40:19 +0000 (0:00:00.311) 0:10:39.805 ****** 2026-01-10 14:41:27.241668 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.241672 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.241675 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.241679 | orchestrator | 2026-01-10 14:41:27.241683 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-10 14:41:27.241686 | orchestrator | Saturday 10 January 2026 14:40:20 +0000 (0:00:00.618) 0:10:40.423 ****** 2026-01-10 14:41:27.241690 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.241694 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.241697 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.241701 | orchestrator | 2026-01-10 14:41:27.241705 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-10 14:41:27.241708 | orchestrator | Saturday 10 January 2026 14:40:20 +0000 (0:00:00.330) 0:10:40.753 ****** 2026-01-10 14:41:27.241712 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.241716 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.241719 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.241723 | orchestrator | 2026-01-10 
14:41:27.241727 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-10 14:41:27.241730 | orchestrator | Saturday 10 January 2026 14:40:21 +0000 (0:00:00.341) 0:10:41.095 ****** 2026-01-10 14:41:27.241734 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.241738 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.241742 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.241745 | orchestrator | 2026-01-10 14:41:27.241749 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-10 14:41:27.241755 | orchestrator | Saturday 10 January 2026 14:40:21 +0000 (0:00:00.324) 0:10:41.419 ****** 2026-01-10 14:41:27.241761 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.241766 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.241772 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.241778 | orchestrator | 2026-01-10 14:41:27.241784 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-10 14:41:27.241790 | orchestrator | Saturday 10 January 2026 14:40:22 +0000 (0:00:00.715) 0:10:42.135 ****** 2026-01-10 14:41:27.241796 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.241808 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.241815 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.241821 | orchestrator | 2026-01-10 14:41:27.241827 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-10 14:41:27.241832 | orchestrator | Saturday 10 January 2026 14:40:22 +0000 (0:00:00.321) 0:10:42.457 ****** 2026-01-10 14:41:27.241839 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.241844 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.241847 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.241851 | orchestrator | 2026-01-10 14:41:27.241855 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-10 14:41:27.241859 | orchestrator | Saturday 10 January 2026 14:40:22 +0000 (0:00:00.336) 0:10:42.794 ****** 2026-01-10 14:41:27.241862 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.241866 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.241870 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.241873 | orchestrator | 2026-01-10 14:41:27.241877 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-01-10 14:41:27.241881 | orchestrator | Saturday 10 January 2026 14:40:23 +0000 (0:00:00.846) 0:10:43.640 ****** 2026-01-10 14:41:27.241888 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:41:27.241892 | orchestrator | 2026-01-10 14:41:27.241896 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-10 14:41:27.241899 | orchestrator | Saturday 10 January 2026 14:40:24 +0000 (0:00:00.544) 0:10:44.185 ****** 2026-01-10 14:41:27.241903 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:27.241907 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-10 14:41:27.241910 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-10 14:41:27.241914 | orchestrator | 2026-01-10 14:41:27.241918 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-10 14:41:27.241924 | orchestrator | Saturday 10 January 2026 14:40:26 +0000 (0:00:02.332) 0:10:46.517 ****** 2026-01-10 14:41:27.241928 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-10 14:41:27.241932 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-10 14:41:27.241936 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:41:27.241939 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-01-10 14:41:27.241943 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-10 14:41:27.241947 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-10 14:41:27.241951 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-10 14:41:27.241954 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:41:27.241958 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:41:27.241961 | orchestrator | 2026-01-10 14:41:27.241965 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-01-10 14:41:27.241969 | orchestrator | Saturday 10 January 2026 14:40:28 +0000 (0:00:01.442) 0:10:47.959 ****** 2026-01-10 14:41:27.241973 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.241976 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.241980 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.241984 | orchestrator | 2026-01-10 14:41:27.241987 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-10 14:41:27.241991 | orchestrator | Saturday 10 January 2026 14:40:28 +0000 (0:00:00.352) 0:10:48.312 ****** 2026-01-10 14:41:27.241995 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:41:27.241999 | orchestrator | 2026-01-10 14:41:27.242002 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-10 14:41:27.242006 | orchestrator | Saturday 10 January 2026 14:40:29 +0000 (0:00:00.538) 0:10:48.850 ****** 2026-01-10 14:41:27.242010 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-10 14:41:27.242050 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-10 14:41:27.242054 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-10 14:41:27.242058 | orchestrator | 2026-01-10 14:41:27.242062 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-10 14:41:27.242066 | orchestrator | Saturday 10 January 2026 14:40:30 +0000 (0:00:01.528) 0:10:50.378 ****** 2026-01-10 14:41:27.242069 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:27.242073 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-10 14:41:27.242077 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:27.242083 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-10 14:41:27.242089 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:27.242095 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-10 14:41:27.242101 | orchestrator | 2026-01-10 14:41:27.242108 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-10 14:41:27.242114 | orchestrator | Saturday 10 January 2026 14:40:35 +0000 (0:00:04.643) 0:10:55.022 ****** 2026-01-10 14:41:27.242121 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:27.242127 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-10 14:41:27.242133 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:27.242140 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-10 14:41:27.242144 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:41:27.242147 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-10 14:41:27.242151 | orchestrator | 2026-01-10 14:41:27.242155 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-10 14:41:27.242158 | orchestrator | Saturday 10 January 2026 14:40:37 +0000 (0:00:02.367) 0:10:57.390 ****** 2026-01-10 14:41:27.242162 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-10 14:41:27.242166 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:41:27.242169 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-10 14:41:27.242173 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:41:27.242177 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-10 14:41:27.242181 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:41:27.242184 | orchestrator | 2026-01-10 14:41:27.242192 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-10 14:41:27.242196 | orchestrator | Saturday 10 January 2026 14:40:38 +0000 (0:00:01.267) 0:10:58.657 ****** 2026-01-10 14:41:27.242201 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-10 14:41:27.242206 | orchestrator | 2026-01-10 14:41:27.242213 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-10 14:41:27.242219 | orchestrator | Saturday 10 January 2026 14:40:39 +0000 (0:00:00.200) 0:10:58.858 ****** 2026-01-10 14:41:27.242228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-01-10 14:41:27.242236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:41:27.242247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:41:27.242254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:41:27.242260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:41:27.242266 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.242271 | orchestrator | 2026-01-10 14:41:27.242277 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-10 14:41:27.242283 | orchestrator | Saturday 10 January 2026 14:40:40 +0000 (0:00:01.029) 0:10:59.887 ****** 2026-01-10 14:41:27.242289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:41:27.242295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:41:27.242300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:41:27.242306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:41:27.242312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-10 14:41:27.242318 | orchestrator | skipping: [testbed-node-3] 2026-01-10 
14:41:27.242324 | orchestrator | 2026-01-10 14:41:27.242330 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-10 14:41:27.242337 | orchestrator | Saturday 10 January 2026 14:40:40 +0000 (0:00:00.672) 0:11:00.559 ****** 2026-01-10 14:41:27.242343 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-10 14:41:27.242347 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-10 14:41:27.242351 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-10 14:41:27.242354 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-10 14:41:27.242358 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-10 14:41:27.242362 | orchestrator | 2026-01-10 14:41:27.242366 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-10 14:41:27.242370 | orchestrator | Saturday 10 January 2026 14:41:10 +0000 (0:00:30.226) 0:11:30.786 ****** 2026-01-10 14:41:27.242373 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.242377 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.242381 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.242385 | orchestrator | 2026-01-10 14:41:27.242388 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-10 14:41:27.242392 | orchestrator | 
Saturday 10 January 2026 14:41:11 +0000 (0:00:00.338) 0:11:31.124 ****** 2026-01-10 14:41:27.242396 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.242399 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.242403 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.242407 | orchestrator | 2026-01-10 14:41:27.242411 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-10 14:41:27.242414 | orchestrator | Saturday 10 January 2026 14:41:11 +0000 (0:00:00.312) 0:11:31.437 ****** 2026-01-10 14:41:27.242422 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:41:27.242426 | orchestrator | 2026-01-10 14:41:27.242429 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-01-10 14:41:27.242433 | orchestrator | Saturday 10 January 2026 14:41:12 +0000 (0:00:00.825) 0:11:32.262 ****** 2026-01-10 14:41:27.242441 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:41:27.242444 | orchestrator | 2026-01-10 14:41:27.242448 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-10 14:41:27.242452 | orchestrator | Saturday 10 January 2026 14:41:12 +0000 (0:00:00.565) 0:11:32.827 ****** 2026-01-10 14:41:27.242456 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:41:27.242459 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:41:27.242463 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:41:27.242467 | orchestrator | 2026-01-10 14:41:27.242470 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-10 14:41:27.242474 | orchestrator | Saturday 10 January 2026 14:41:14 +0000 (0:00:01.274) 0:11:34.101 ****** 2026-01-10 14:41:27.242483 | orchestrator | changed: 
[testbed-node-3] 2026-01-10 14:41:27.242487 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:41:27.242491 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:41:27.242494 | orchestrator | 2026-01-10 14:41:27.242532 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-10 14:41:27.242537 | orchestrator | Saturday 10 January 2026 14:41:15 +0000 (0:00:01.542) 0:11:35.644 ****** 2026-01-10 14:41:27.242541 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:41:27.242544 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:41:27.242548 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:41:27.242552 | orchestrator | 2026-01-10 14:41:27.242555 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-10 14:41:27.242559 | orchestrator | Saturday 10 January 2026 14:41:17 +0000 (0:00:02.175) 0:11:37.819 ****** 2026-01-10 14:41:27.242563 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-10 14:41:27.242567 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-10 14:41:27.242570 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-10 14:41:27.242574 | orchestrator | 2026-01-10 14:41:27.242578 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-10 14:41:27.242581 | orchestrator | Saturday 10 January 2026 14:41:21 +0000 (0:00:03.036) 0:11:40.855 ****** 2026-01-10 14:41:27.242585 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.242589 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.242592 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.242596 | orchestrator 
| 2026-01-10 14:41:27.242600 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-10 14:41:27.242604 | orchestrator | Saturday 10 January 2026 14:41:21 +0000 (0:00:00.398) 0:11:41.254 ****** 2026-01-10 14:41:27.242607 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:41:27.242611 | orchestrator | 2026-01-10 14:41:27.242615 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-10 14:41:27.242618 | orchestrator | Saturday 10 January 2026 14:41:21 +0000 (0:00:00.549) 0:11:41.804 ****** 2026-01-10 14:41:27.242622 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.242626 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.242629 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.242633 | orchestrator | 2026-01-10 14:41:27.242637 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-10 14:41:27.242646 | orchestrator | Saturday 10 January 2026 14:41:22 +0000 (0:00:00.658) 0:11:42.462 ****** 2026-01-10 14:41:27.242650 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:41:27.242654 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:41:27.242657 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:41:27.242661 | orchestrator | 2026-01-10 14:41:27.242665 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-10 14:41:27.242668 | orchestrator | Saturday 10 January 2026 14:41:22 +0000 (0:00:00.341) 0:11:42.804 ****** 2026-01-10 14:41:27.242672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:41:27.242676 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:41:27.242679 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:41:27.242683 | orchestrator 
| skipping: [testbed-node-3] 2026-01-10 14:41:27.242687 | orchestrator | 2026-01-10 14:41:27.242691 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-10 14:41:27.242694 | orchestrator | Saturday 10 January 2026 14:41:23 +0000 (0:00:00.672) 0:11:43.476 ****** 2026-01-10 14:41:27.242698 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:41:27.242702 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:41:27.242705 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:41:27.242709 | orchestrator | 2026-01-10 14:41:27.242713 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:41:27.242716 | orchestrator | testbed-node-0 : ok=134  changed=34  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-01-10 14:41:27.242721 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-01-10 14:41:27.242725 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-01-10 14:41:27.242728 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-01-10 14:41:27.242732 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-01-10 14:41:27.242740 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-01-10 14:41:27.242743 | orchestrator | 2026-01-10 14:41:27.242747 | orchestrator | 2026-01-10 14:41:27.242751 | orchestrator | 2026-01-10 14:41:27.242755 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:41:27.242758 | orchestrator | Saturday 10 January 2026 14:41:23 +0000 (0:00:00.253) 0:11:43.729 ****** 2026-01-10 14:41:27.242762 | orchestrator | =============================================================================== 
2026-01-10 14:41:27.242766 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 51.35s 2026-01-10 14:41:27.242773 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 45.03s 2026-01-10 14:41:27.242777 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.21s 2026-01-10 14:41:27.242781 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.23s 2026-01-10 14:41:27.242784 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.88s 2026-01-10 14:41:27.242788 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.38s 2026-01-10 14:41:27.242792 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.53s 2026-01-10 14:41:27.242796 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.72s 2026-01-10 14:41:27.242799 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.86s 2026-01-10 14:41:27.242803 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.65s 2026-01-10 14:41:27.242813 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.48s 2026-01-10 14:41:27.242816 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.58s 2026-01-10 14:41:27.242820 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.63s 2026-01-10 14:41:27.242824 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 5.55s 2026-01-10 14:41:27.242828 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.64s 2026-01-10 14:41:27.242831 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.22s 2026-01-10 
14:41:27.242835 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.06s
2026-01-10 14:41:27.242839 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.97s
2026-01-10 14:41:27.242843 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.66s
2026-01-10 14:41:27.242846 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.62s
2026-01-10 14:41:27.242850 | orchestrator | 2026-01-10 14:41:27 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED
2026-01-10 14:41:27.242854 | orchestrator | 2026-01-10 14:41:27 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:41:27.242858 | orchestrator | 2026-01-10 14:41:27 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:41:27.242862 | orchestrator | 2026-01-10 14:41:27 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:30.275741 | orchestrator | 2026-01-10 14:41:30 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED
2026-01-10 14:41:30.277778 | orchestrator | 2026-01-10 14:41:30 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:41:30.278867 | orchestrator | 2026-01-10 14:41:30 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:41:30.278909 | orchestrator | 2026-01-10 14:41:30 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:33.334487 | orchestrator | 2026-01-10 14:41:33 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED
2026-01-10 14:41:33.335724 | orchestrator | 2026-01-10 14:41:33 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:41:33.338153 | orchestrator | 2026-01-10 14:41:33 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:41:33.338218 | orchestrator | 2026-01-10 14:41:33 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:36.398429 | orchestrator | 2026-01-10 14:41:36 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED
2026-01-10 14:41:36.402258 | orchestrator | 2026-01-10 14:41:36 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:41:36.406787 | orchestrator | 2026-01-10 14:41:36 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:41:36.406868 | orchestrator | 2026-01-10 14:41:36 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:39.451759 | orchestrator | 2026-01-10 14:41:39 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED
2026-01-10 14:41:39.453246 | orchestrator | 2026-01-10 14:41:39 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:41:39.454546 | orchestrator | 2026-01-10 14:41:39 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:41:39.455319 | orchestrator | 2026-01-10 14:41:39 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:42.500685 | orchestrator | 2026-01-10 14:41:42 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED
2026-01-10 14:41:42.502393 | orchestrator | 2026-01-10 14:41:42 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:41:42.504228 | orchestrator | 2026-01-10 14:41:42 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:41:42.504365 | orchestrator | 2026-01-10 14:41:42 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:45.564772 | orchestrator | 2026-01-10 14:41:45 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED
2026-01-10 14:41:45.567873 | orchestrator | 2026-01-10 14:41:45 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:41:45.570590 | orchestrator | 2026-01-10 14:41:45 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:41:45.570706 | orchestrator | 2026-01-10 14:41:45 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:48.608696 | orchestrator | 2026-01-10 14:41:48 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED
2026-01-10 14:41:48.619097 | orchestrator | 2026-01-10 14:41:48 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:41:48.621681 | orchestrator | 2026-01-10 14:41:48 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:41:48.621722 | orchestrator | 2026-01-10 14:41:48 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:51.675435 | orchestrator | 2026-01-10 14:41:51 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED
2026-01-10 14:41:51.677075 | orchestrator | 2026-01-10 14:41:51 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:41:51.678799 | orchestrator | 2026-01-10 14:41:51 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:41:51.678867 | orchestrator | 2026-01-10 14:41:51 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:54.725842 | orchestrator | 2026-01-10 14:41:54 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED
2026-01-10 14:41:54.727696 | orchestrator | 2026-01-10 14:41:54 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:41:54.731975 | orchestrator | 2026-01-10 14:41:54 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:41:54.732056 | orchestrator | 2026-01-10 14:41:54 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:41:57.780039 | orchestrator | 2026-01-10 14:41:57 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED
2026-01-10 14:41:57.782792 | orchestrator | 2026-01-10 14:41:57 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:41:57.784899 | orchestrator | 2026-01-10 14:41:57 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:41:57.784950 | orchestrator | 2026-01-10 14:41:57 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:00.834244 | orchestrator | 2026-01-10 14:42:00 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED
2026-01-10 14:42:00.836272 | orchestrator | 2026-01-10 14:42:00 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:00.838373 | orchestrator | 2026-01-10 14:42:00 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:00.838417 | orchestrator | 2026-01-10 14:42:00 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:03.882883 | orchestrator | 2026-01-10 14:42:03 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED
2026-01-10 14:42:03.885805 | orchestrator | 2026-01-10 14:42:03 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:03.887609 | orchestrator | 2026-01-10 14:42:03 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:03.887657 | orchestrator | 2026-01-10 14:42:03 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:06.930337 | orchestrator | 2026-01-10 14:42:06 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED
2026-01-10 14:42:06.931075 | orchestrator | 2026-01-10 14:42:06 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:06.932754 | orchestrator | 2026-01-10 14:42:06 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:06.932788 | orchestrator | 2026-01-10 14:42:06 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:09.979036 | orchestrator | 2026-01-10 14:42:09 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED
2026-01-10 14:42:09.981989 | orchestrator | 2026-01-10 14:42:09 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:09.984967 | orchestrator | 2026-01-10 14:42:09 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:09.985033 | orchestrator | 2026-01-10 14:42:09 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:13.035899 | orchestrator | 2026-01-10 14:42:13 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED
2026-01-10 14:42:13.038103 | orchestrator | 2026-01-10 14:42:13 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:13.039311 | orchestrator | 2026-01-10 14:42:13 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:13.039538 | orchestrator | 2026-01-10 14:42:13 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:16.085013 | orchestrator | 2026-01-10 14:42:16 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED
2026-01-10 14:42:16.086417 | orchestrator | 2026-01-10 14:42:16 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:16.088850 | orchestrator | 2026-01-10 14:42:16 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:16.088956 | orchestrator | 2026-01-10 14:42:16 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:19.135642 | orchestrator | 2026-01-10 14:42:19 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state STARTED
2026-01-10 14:42:19.135728 | orchestrator | 2026-01-10 14:42:19 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:19.138609 | orchestrator | 2026-01-10 14:42:19 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:19.138669 | orchestrator | 2026-01-10 14:42:19 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:22.211392 | orchestrator | 2026-01-10 14:42:22 | INFO  | Task e1c1f69f-b10d-408b-abed-be39393f1ae8 is in state SUCCESS
2026-01-10 14:42:22.213477 | orchestrator |
2026-01-10 14:42:22.213581 | orchestrator |
2026-01-10 14:42:22.213590 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:42:22.213639 | orchestrator |
2026-01-10 14:42:22.213643 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:42:22.213648 | orchestrator | Saturday 10 January 2026 14:39:33 +0000 (0:00:00.262) 0:00:00.262 ******
2026-01-10 14:42:22.213652 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:42:22.213657 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:42:22.213661 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:42:22.213664 | orchestrator |
2026-01-10 14:42:22.213668 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:42:22.213692 | orchestrator | Saturday 10 January 2026 14:39:34 +0000 (0:00:00.321) 0:00:00.583 ******
2026-01-10 14:42:22.213771 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-01-10 14:42:22.213777 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-01-10 14:42:22.213781 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-01-10 14:42:22.213793 | orchestrator |
2026-01-10 14:42:22.213797 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-01-10 14:42:22.213800 | orchestrator |
2026-01-10 14:42:22.213804 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-01-10 14:42:22.213808 | orchestrator | Saturday 10 January 2026 14:39:34 +0000 (0:00:00.455) 0:00:01.038 ******
2026-01-10 14:42:22.213812 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:42:22.213816 | orchestrator |
2026-01-10 14:42:22.213820 | orchestrator | TASK [opensearch : Setting
sysctl values] ************************************** 2026-01-10 14:42:22.213824 | orchestrator | Saturday 10 January 2026 14:39:35 +0000 (0:00:00.515) 0:00:01.554 ****** 2026-01-10 14:42:22.213827 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-10 14:42:22.213831 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-10 14:42:22.213835 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-10 14:42:22.213839 | orchestrator | 2026-01-10 14:42:22.213842 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-01-10 14:42:22.213846 | orchestrator | Saturday 10 January 2026 14:39:36 +0000 (0:00:01.660) 0:00:03.215 ****** 2026-01-10 14:42:22.213864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:42:22.213871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:42:22.213885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:42:22.213896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-10 14:42:22.213904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-10 
14:42:22.213909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-10 14:42:22.213913 | orchestrator | 2026-01-10 14:42:22.213917 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-10 14:42:22.213924 | orchestrator | Saturday 10 January 2026 14:39:38 +0000 (0:00:02.079) 0:00:05.294 ****** 2026-01-10 14:42:22.213928 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:22.213931 | orchestrator | 2026-01-10 14:42:22.213935 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-01-10 14:42:22.213943 | orchestrator | Saturday 10 January 2026 14:39:39 +0000 (0:00:00.582) 0:00:05.877 ****** 2026-01-10 14:42:22.213947 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:42:22.213952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:42:22.213959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:42:22.213963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-10 14:42:22.213975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-10 14:42:22.213979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-10 14:42:22.213984 | orchestrator | 2026-01-10 14:42:22.213987 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-10 14:42:22.214055 | orchestrator | Saturday 10 January 2026 14:39:42 +0000 (0:00:02.920) 0:00:08.797 ****** 2026-01-10 14:42:22.214065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:42:22.214071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:42:22.214083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-10 14:42:22.214088 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:22.214092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-10 14:42:22.214096 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:22.214103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:42:22.214114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-10 14:42:22.214118 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:22.214122 | orchestrator | 2026-01-10 14:42:22.214126 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-01-10 14:42:22.214130 | orchestrator | Saturday 10 January 2026 14:39:43 +0000 (0:00:01.304) 0:00:10.102 ****** 2026-01-10 14:42:22.214134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:42:22.214140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-10 14:42:22.214145 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:22.214148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:42:22.214160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-10 14:42:22.214164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:42:22.214171 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:22.214181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET 
/api/status']}}}})  2026-01-10 14:42:22.214187 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:22.214205 | orchestrator | 2026-01-10 14:42:22.214214 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-10 14:42:22.214220 | orchestrator | Saturday 10 January 2026 14:39:44 +0000 (0:00:00.950) 0:00:11.053 ****** 2026-01-10 14:42:22.214226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:42:22.214240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:42:22.214246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:42:22.214256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk 
GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-10 14:42:22.214269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-10 14:42:22.214280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-10 14:42:22.214286 | orchestrator | 2026-01-10 14:42:22.214293 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-10 14:42:22.214299 | orchestrator | Saturday 10 January 2026 14:39:47 +0000 (0:00:02.626) 0:00:13.680 ****** 2026-01-10 14:42:22.214305 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:22.214311 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:22.214316 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:22.214322 | orchestrator | 2026-01-10 14:42:22.214327 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-10 14:42:22.214333 | orchestrator | Saturday 10 January 2026 14:39:49 +0000 (0:00:02.572) 0:00:16.252 ****** 2026-01-10 14:42:22.214339 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:22.214345 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:22.214352 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:22.214358 | orchestrator | 2026-01-10 14:42:22.214363 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-01-10 14:42:22.214367 | orchestrator | Saturday 10 January 2026 14:39:52 +0000 (0:00:02.639) 
0:00:18.891 ****** 2026-01-10 14:42:22.214371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:42:22.214384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:42:22.214388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:42:22.214396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-10 14:42:22.214400 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-10 14:42:22.214411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-10 14:42:22.214415 | orchestrator | 2026-01-10 14:42:22.214419 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-01-10 14:42:22.214423 | orchestrator | Saturday 10 January 2026 14:39:54 +0000 (0:00:02.211) 0:00:21.102 ****** 2026-01-10 14:42:22.214427 | orchestrator | changed: [testbed-node-0] => { 2026-01-10 14:42:22.214431 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:42:22.214435 | orchestrator | } 2026-01-10 14:42:22.214439 | orchestrator | changed: [testbed-node-1] => { 2026-01-10 14:42:22.214444 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:42:22.214530 | orchestrator | } 2026-01-10 14:42:22.214540 | orchestrator | changed: [testbed-node-2] => { 2026-01-10 14:42:22.214546 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:42:22.214552 | orchestrator | } 2026-01-10 14:42:22.214557 | orchestrator | 2026-01-10 14:42:22.214563 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-10 14:42:22.214575 | orchestrator | Saturday 10 January 2026 14:39:54 +0000 (0:00:00.339) 0:00:21.442 ****** 2026-01-10 14:42:22.214582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:42:22.214589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-10 14:42:22.214602 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:22.214615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': 
'-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:42:22.214622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-10 14:42:22.214627 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:22.214631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:42:22.214635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option 
httpchk GET /api/status']}}}})  2026-01-10 14:42:22.214643 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:22.214647 | orchestrator | 2026-01-10 14:42:22.214650 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-10 14:42:22.214654 | orchestrator | Saturday 10 January 2026 14:39:57 +0000 (0:00:02.081) 0:00:23.523 ****** 2026-01-10 14:42:22.214658 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:22.214664 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:42:22.214669 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:42:22.214673 | orchestrator | 2026-01-10 14:42:22.214678 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-10 14:42:22.214682 | orchestrator | Saturday 10 January 2026 14:39:57 +0000 (0:00:00.375) 0:00:23.898 ****** 2026-01-10 14:42:22.214686 | orchestrator | 2026-01-10 14:42:22.214690 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-10 14:42:22.214694 | orchestrator | Saturday 10 January 2026 14:39:57 +0000 (0:00:00.143) 0:00:24.042 ****** 2026-01-10 14:42:22.214699 | orchestrator | 2026-01-10 14:42:22.214703 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-10 14:42:22.214708 | orchestrator | Saturday 10 January 2026 14:39:57 +0000 (0:00:00.081) 0:00:24.123 ****** 2026-01-10 14:42:22.214712 | orchestrator | 2026-01-10 14:42:22.214716 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-10 14:42:22.214720 | orchestrator | Saturday 10 January 2026 14:39:57 +0000 (0:00:00.070) 0:00:24.194 ****** 2026-01-10 14:42:22.214724 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:22.214729 | orchestrator | 2026-01-10 14:42:22.214733 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-10 
14:42:22.214737 | orchestrator | Saturday 10 January 2026 14:39:57 +0000 (0:00:00.241) 0:00:24.435 ****** 2026-01-10 14:42:22.214742 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:42:22.214746 | orchestrator | 2026-01-10 14:42:22.214750 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-10 14:42:22.214754 | orchestrator | Saturday 10 January 2026 14:39:58 +0000 (0:00:00.196) 0:00:24.632 ****** 2026-01-10 14:42:22.214759 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:22.214763 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:22.214767 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:22.214771 | orchestrator | 2026-01-10 14:42:22.214776 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-10 14:42:22.214780 | orchestrator | Saturday 10 January 2026 14:40:51 +0000 (0:00:53.490) 0:01:18.122 ****** 2026-01-10 14:42:22.214784 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:22.214789 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:42:22.214793 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:42:22.214797 | orchestrator | 2026-01-10 14:42:22.214801 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-10 14:42:22.214806 | orchestrator | Saturday 10 January 2026 14:42:04 +0000 (0:01:13.019) 0:02:31.142 ****** 2026-01-10 14:42:22.214817 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:42:22.214822 | orchestrator | 2026-01-10 14:42:22.214826 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-10 14:42:22.214830 | orchestrator | Saturday 10 January 2026 14:42:05 +0000 (0:00:00.525) 0:02:31.667 ****** 2026-01-10 14:42:22.214835 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:22.214839 | 
orchestrator | 2026-01-10 14:42:22.214843 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-01-10 14:42:22.214850 | orchestrator | Saturday 10 January 2026 14:42:07 +0000 (0:00:02.665) 0:02:34.332 ****** 2026-01-10 14:42:22.214855 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:22.214860 | orchestrator | 2026-01-10 14:42:22.214864 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-10 14:42:22.214869 | orchestrator | Saturday 10 January 2026 14:42:10 +0000 (0:00:02.387) 0:02:36.720 ****** 2026-01-10 14:42:22.214873 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:42:22.214877 | orchestrator | 2026-01-10 14:42:22.214881 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-10 14:42:22.214885 | orchestrator | Saturday 10 January 2026 14:42:13 +0000 (0:00:03.023) 0:02:39.744 ****** 2026-01-10 14:42:22.214889 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:22.214893 | orchestrator | 2026-01-10 14:42:22.214898 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-10 14:42:22.214902 | orchestrator | Saturday 10 January 2026 14:42:16 +0000 (0:00:02.934) 0:02:42.678 ****** 2026-01-10 14:42:22.214912 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:42:22.214917 | orchestrator | 2026-01-10 14:42:22.214921 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:42:22.214926 | orchestrator | testbed-node-0 : ok=20  changed=12  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-10 14:42:22.214931 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-10 14:42:22.214936 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-10 14:42:22.214940 | orchestrator | 
2026-01-10 14:42:22.214944 | orchestrator |
2026-01-10 14:42:22.214949 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:42:22.214953 | orchestrator | Saturday 10 January 2026 14:42:18 +0000 (0:00:02.638) 0:02:45.316 ******
2026-01-10 14:42:22.214957 | orchestrator | ===============================================================================
2026-01-10 14:42:22.214961 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 73.02s
2026-01-10 14:42:22.214966 | orchestrator | opensearch : Restart opensearch container ------------------------------ 53.49s
2026-01-10 14:42:22.214970 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.02s
2026-01-10 14:42:22.214974 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.93s
2026-01-10 14:42:22.214978 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.92s
2026-01-10 14:42:22.214983 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.67s
2026-01-10 14:42:22.214989 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.64s
2026-01-10 14:42:22.214994 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.64s
2026-01-10 14:42:22.214998 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.63s
2026-01-10 14:42:22.215002 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.57s
2026-01-10 14:42:22.215006 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.39s
2026-01-10 14:42:22.215010 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.21s
2026-01-10 14:42:22.215021 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.08s
2026-01-10 14:42:22.215026 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.08s
2026-01-10 14:42:22.215030 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.66s
2026-01-10 14:42:22.215034 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.30s
2026-01-10 14:42:22.215038 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.95s
2026-01-10 14:42:22.215043 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.58s
2026-01-10 14:42:22.215047 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s
2026-01-10 14:42:22.215051 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s
2026-01-10 14:42:22.215055 | orchestrator | 2026-01-10 14:42:22 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:22.216009 | orchestrator | 2026-01-10 14:42:22 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:22.216029 | orchestrator | 2026-01-10 14:42:22 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:25.270689 | orchestrator | 2026-01-10 14:42:25 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:25.271207 | orchestrator | 2026-01-10 14:42:25 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:25.271264 | orchestrator | 2026-01-10 14:42:25 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:28.324184 | orchestrator | 2026-01-10 14:42:28 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:28.327004 | orchestrator | 2026-01-10 14:42:28 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:28.327061 | orchestrator | 2026-01-10 14:42:28 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:31.369963 | orchestrator | 2026-01-10 14:42:31 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:31.372812 | orchestrator | 2026-01-10 14:42:31 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:31.372899 | orchestrator | 2026-01-10 14:42:31 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:34.409728 | orchestrator | 2026-01-10 14:42:34 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:34.413258 | orchestrator | 2026-01-10 14:42:34 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:34.413328 | orchestrator | 2026-01-10 14:42:34 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:37.464650 | orchestrator | 2026-01-10 14:42:37 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:37.465906 | orchestrator | 2026-01-10 14:42:37 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:37.466314 | orchestrator | 2026-01-10 14:42:37 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:40.518629 | orchestrator | 2026-01-10 14:42:40 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:40.519065 | orchestrator | 2026-01-10 14:42:40 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:40.519301 | orchestrator | 2026-01-10 14:42:40 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:43.573410 | orchestrator | 2026-01-10 14:42:43 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:43.576884 | orchestrator | 2026-01-10 14:42:43 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:43.576970 | orchestrator | 2026-01-10 14:42:43 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:46.618635 | orchestrator | 2026-01-10 14:42:46 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:46.622556 | orchestrator | 2026-01-10 14:42:46 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:46.622675 | orchestrator | 2026-01-10 14:42:46 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:49.671161 | orchestrator | 2026-01-10 14:42:49 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:49.671798 | orchestrator | 2026-01-10 14:42:49 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:49.671837 | orchestrator | 2026-01-10 14:42:49 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:52.721750 | orchestrator | 2026-01-10 14:42:52 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:52.723459 | orchestrator | 2026-01-10 14:42:52 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:52.723517 | orchestrator | 2026-01-10 14:42:52 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:55.782845 | orchestrator | 2026-01-10 14:42:55 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:55.786218 | orchestrator | 2026-01-10 14:42:55 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:55.787239 | orchestrator | 2026-01-10 14:42:55 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:42:58.833043 | orchestrator | 2026-01-10 14:42:58 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:42:58.834983 | orchestrator | 2026-01-10 14:42:58 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state STARTED
2026-01-10 14:42:58.835062 | orchestrator | 2026-01-10 14:42:58 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:43:01.879808 | orchestrator | 2026-01-10 14:43:01 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED
2026-01-10 14:43:01.884895 | orchestrator | 2026-01-10 14:43:01 | INFO  | Task b6367eb5-2686-46ac-9949-20af3bf840c4 is in state SUCCESS
2026-01-10 14:43:01.886681 | orchestrator |
2026-01-10 14:43:01.886753 | orchestrator |
2026-01-10 14:43:01.886764 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2026-01-10 14:43:01.886773 | orchestrator |
2026-01-10 14:43:01.886782 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-01-10 14:43:01.886791 | orchestrator | Saturday 10 January 2026 14:39:33 +0000 (0:00:00.097) 0:00:00.097 ******
2026-01-10 14:43:01.886799 | orchestrator | ok: [localhost] => {
2026-01-10 14:43:01.886809 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-01-10 14:43:01.886817 | orchestrator | }
2026-01-10 14:43:01.886826 | orchestrator |
2026-01-10 14:43:01.886834 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-01-10 14:43:01.886842 | orchestrator | Saturday 10 January 2026 14:39:33 +0000 (0:00:00.059) 0:00:00.157 ******
2026-01-10 14:43:01.886850 | orchestrator | fatal: [localhost]: FAILED!
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-01-10 14:43:01.886862 | orchestrator | ...ignoring
2026-01-10 14:43:01.886875 | orchestrator |
2026-01-10 14:43:01.886887 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-01-10 14:43:01.886899 | orchestrator | Saturday 10 January 2026 14:39:36 +0000 (0:00:02.858) 0:00:03.015 ******
2026-01-10 14:43:01.886912 | orchestrator | skipping: [localhost]
2026-01-10 14:43:01.886926 | orchestrator |
2026-01-10 14:43:01.886970 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-01-10 14:43:01.886986 | orchestrator | Saturday 10 January 2026 14:39:36 +0000 (0:00:00.069) 0:00:03.085 ******
2026-01-10 14:43:01.886999 | orchestrator | ok: [localhost]
2026-01-10 14:43:01.887012 | orchestrator |
2026-01-10 14:43:01.887024 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:43:01.887403 | orchestrator |
2026-01-10 14:43:01.887444 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:43:01.887457 | orchestrator | Saturday 10 January 2026 14:39:36 +0000 (0:00:00.192) 0:00:03.278 ******
2026-01-10 14:43:01.887472 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:43:01.887486 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:43:01.887499 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:43:01.887509 | orchestrator |
2026-01-10 14:43:01.887517 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:43:01.887525 | orchestrator | Saturday 10 January 2026 14:39:37 +0000 (0:00:00.383) 0:00:03.661 ******
2026-01-10 14:43:01.887533 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-01-10 14:43:01.887542 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-01-10 14:43:01.887549 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-10 14:43:01.887557 | orchestrator | 2026-01-10 14:43:01.887565 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-10 14:43:01.887573 | orchestrator | 2026-01-10 14:43:01.887581 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-10 14:43:01.887589 | orchestrator | Saturday 10 January 2026 14:39:38 +0000 (0:00:00.918) 0:00:04.579 ****** 2026-01-10 14:43:01.887597 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-10 14:43:01.887605 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-10 14:43:01.887612 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-10 14:43:01.887620 | orchestrator | 2026-01-10 14:43:01.887627 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-10 14:43:01.887635 | orchestrator | Saturday 10 January 2026 14:39:38 +0000 (0:00:00.418) 0:00:04.998 ****** 2026-01-10 14:43:01.887657 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:43:01.887666 | orchestrator | 2026-01-10 14:43:01.887674 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-10 14:43:01.887682 | orchestrator | Saturday 10 January 2026 14:39:38 +0000 (0:00:00.548) 0:00:05.546 ****** 2026-01-10 14:43:01.887737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:43:01.887773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:43:01.887796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:43:01.887820 | orchestrator | 2026-01-10 14:43:01.887872 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-10 14:43:01.887888 | orchestrator | Saturday 10 January 2026 14:39:42 +0000 (0:00:03.052) 0:00:08.598 ****** 2026-01-10 14:43:01.887899 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:01.887908 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:43:01.887916 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:01.887924 | orchestrator | 2026-01-10 14:43:01.887931 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-10 14:43:01.887939 | orchestrator | Saturday 10 January 2026 14:39:42 +0000 (0:00:00.642) 0:00:09.241 ****** 2026-01-10 14:43:01.887947 | orchestrator | skipping: [testbed-node-1] 2026-01-10 
14:43:01.887955 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:01.887962 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:43:01.887970 | orchestrator | 2026-01-10 14:43:01.887978 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-10 14:43:01.887986 | orchestrator | Saturday 10 January 2026 14:39:44 +0000 (0:00:01.822) 0:00:11.063 ****** 2026-01-10 14:43:01.887995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:43:01.888062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:43:01.888092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 
14:43:01.888108 | orchestrator | 2026-01-10 14:43:01.888123 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-10 14:43:01.888137 | orchestrator | Saturday 10 January 2026 14:39:48 +0000 (0:00:03.876) 0:00:14.940 ****** 2026-01-10 14:43:01.888151 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:01.888163 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:01.888172 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:43:01.888181 | orchestrator | 2026-01-10 14:43:01.888190 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-10 14:43:01.888199 | orchestrator | Saturday 10 January 2026 14:39:49 +0000 (0:00:01.383) 0:00:16.324 ****** 2026-01-10 14:43:01.888208 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:43:01.888222 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:43:01.888231 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:43:01.888241 | orchestrator | 2026-01-10 14:43:01.888250 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-10 14:43:01.888259 | orchestrator | Saturday 10 January 2026 14:39:54 +0000 (0:00:04.912) 0:00:21.236 ****** 2026-01-10 14:43:01.888268 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:43:01.888278 | orchestrator | 2026-01-10 14:43:01.888290 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-10 14:43:01.888303 | orchestrator | Saturday 10 January 2026 14:39:55 +0000 (0:00:00.581) 0:00:21.818 ****** 2026-01-10 14:43:01.888337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:01.888353 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:01.888373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:01.888388 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:01.888435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:01.888460 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:01.888473 | orchestrator | 2026-01-10 14:43:01.888486 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-10 14:43:01.888499 | orchestrator | Saturday 10 January 2026 14:39:58 +0000 (0:00:03.081) 0:00:24.899 ****** 2026-01-10 14:43:01.888513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:01.888528 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:01.888548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:01.888563 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:01.888576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:01.888594 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:01.888728 | orchestrator | 2026-01-10 14:43:01.888747 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-10 14:43:01.888761 | orchestrator | Saturday 10 January 2026 14:40:02 +0000 (0:00:03.856) 0:00:28.756 ****** 2026-01-10 14:43:01.888783 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:01.888805 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:01.888831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:01.888844 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:01.888862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:01.888883 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:01.888897 | orchestrator | 2026-01-10 14:43:01.888910 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-01-10 14:43:01.888924 | orchestrator | Saturday 10 January 2026 14:40:05 +0000 
(0:00:02.873) 0:00:31.630 ****** 2026-01-10 14:43:01.888949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:43:01.888970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:43:01.889003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-10 14:43:01.889019 | orchestrator | 2026-01-10 14:43:01.889031 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-01-10 14:43:01.889043 | orchestrator | Saturday 10 January 2026 14:40:08 +0000 (0:00:03.787) 0:00:35.418 ****** 2026-01-10 14:43:01.889056 | orchestrator | changed: [testbed-node-0] => { 2026-01-10 14:43:01.889068 | 
orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:43:01.889081 | orchestrator | } 2026-01-10 14:43:01.889094 | orchestrator | changed: [testbed-node-1] => { 2026-01-10 14:43:01.889105 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:43:01.889116 | orchestrator | } 2026-01-10 14:43:01.889129 | orchestrator | changed: [testbed-node-2] => { 2026-01-10 14:43:01.889141 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:43:01.889154 | orchestrator | } 2026-01-10 14:43:01.889168 | orchestrator | 2026-01-10 14:43:01.889182 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-10 14:43:01.889195 | orchestrator | Saturday 10 January 2026 14:40:09 +0000 (0:00:00.622) 0:00:36.040 ****** 2026-01-10 14:43:01.889224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:01.889233 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:01.889250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:01.889259 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:01.889271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-10 14:43:01.889286 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:01.889294 | orchestrator | 2026-01-10 14:43:01.889301 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-01-10 14:43:01.889309 | orchestrator | Saturday 10 January 2026 14:40:12 +0000 (0:00:03.043) 0:00:39.083 ****** 2026-01-10 14:43:01.889317 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:01.889325 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:01.889333 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:01.889340 | orchestrator | 2026-01-10 14:43:01.889348 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-01-10 14:43:01.889356 | orchestrator | Saturday 10 January 2026 14:40:12 +0000 (0:00:00.298) 0:00:39.382 ****** 2026-01-10 14:43:01.889364 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:01.889372 | orchestrator | 2026-01-10 14:43:01.889379 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-01-10 14:43:01.889387 | orchestrator | Saturday 10 January 2026 14:40:12 +0000 (0:00:00.103) 0:00:39.485 ****** 2026-01-10 14:43:01.889395 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:01.889405 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:01.889444 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:01.889458 | orchestrator | 2026-01-10 14:43:01.889472 | orchestrator | TASK 
[mariadb : Run MariaDB wsrep recovery] ************************************ 2026-01-10 14:43:01.889485 | orchestrator | Saturday 10 January 2026 14:40:13 +0000 (0:00:00.442) 0:00:39.928 ****** 2026-01-10 14:43:01.889505 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:01.889518 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:01.889530 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:01.889543 | orchestrator | 2026-01-10 14:43:01.889555 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-01-10 14:43:01.889568 | orchestrator | Saturday 10 January 2026 14:40:13 +0000 (0:00:00.303) 0:00:40.232 ****** 2026-01-10 14:43:01.889581 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:01.889594 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:01.889607 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:01.889620 | orchestrator | 2026-01-10 14:43:01.889634 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-01-10 14:43:01.889647 | orchestrator | Saturday 10 January 2026 14:40:13 +0000 (0:00:00.286) 0:00:40.519 ****** 2026-01-10 14:43:01.889661 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:01.889685 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:01.889693 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:01.889701 | orchestrator | 2026-01-10 14:43:01.889709 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-01-10 14:43:01.889717 | orchestrator | Saturday 10 January 2026 14:40:14 +0000 (0:00:00.285) 0:00:40.805 ****** 2026-01-10 14:43:01.889724 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:01.889733 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:01.889746 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:01.889759 | orchestrator | 2026-01-10 14:43:01.889772 | orchestrator | TASK 
[mariadb : Registering MariaDB seqno variable] **************************** 2026-01-10 14:43:01.889785 | orchestrator | Saturday 10 January 2026 14:40:14 +0000 (0:00:00.414) 0:00:41.219 ****** 2026-01-10 14:43:01.889800 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:01.889813 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:01.889826 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:01.889839 | orchestrator | 2026-01-10 14:43:01.889849 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-01-10 14:43:01.889857 | orchestrator | Saturday 10 January 2026 14:40:15 +0000 (0:00:00.362) 0:00:41.582 ****** 2026-01-10 14:43:01.889865 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-10 14:43:01.889874 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-10 14:43:01.889881 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-10 14:43:01.889889 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:01.889897 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-10 14:43:01.889905 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-10 14:43:01.889913 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-10 14:43:01.889920 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:01.889928 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-10 14:43:01.889936 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-10 14:43:01.889943 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-10 14:43:01.889951 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:01.889959 | orchestrator | 2026-01-10 14:43:01.889966 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-01-10 14:43:01.889974 | orchestrator | 
Saturday 10 January 2026 14:40:15 +0000 (0:00:00.426) 0:00:42.008 ****** 2026-01-10 14:43:01.889982 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:01.889991 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:01.890004 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:01.890084 | orchestrator | 2026-01-10 14:43:01.890104 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-01-10 14:43:01.890118 | orchestrator | Saturday 10 January 2026 14:40:15 +0000 (0:00:00.282) 0:00:42.291 ****** 2026-01-10 14:43:01.890134 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:01.890142 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:01.890150 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:01.890158 | orchestrator | 2026-01-10 14:43:01.890166 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-01-10 14:43:01.890174 | orchestrator | Saturday 10 January 2026 14:40:16 +0000 (0:00:00.438) 0:00:42.729 ****** 2026-01-10 14:43:01.890182 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:01.890189 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:01.890197 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:01.890207 | orchestrator | 2026-01-10 14:43:01.890221 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-01-10 14:43:01.890234 | orchestrator | Saturday 10 January 2026 14:40:16 +0000 (0:00:00.282) 0:00:43.012 ****** 2026-01-10 14:43:01.890248 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:43:01.890262 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:43:01.890285 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:43:01.890299 | orchestrator | 2026-01-10 14:43:01.890307 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-01-10 14:43:01.890315 | orchestrator | 
Saturday 10 January 2026 14:40:16 +0000 (0:00:00.313) 0:00:43.325 ******
2026-01-10 14:43:01.890324 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.890337 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.890350 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.890363 | orchestrator |
2026-01-10 14:43:01.890375 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ******************************
2026-01-10 14:43:01.890388 | orchestrator | Saturday 10 January 2026 14:40:17 +0000 (0:00:00.311) 0:00:43.637 ******
2026-01-10 14:43:01.890402 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.890546 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.890557 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.890565 | orchestrator |
2026-01-10 14:43:01.890573 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************
2026-01-10 14:43:01.890581 | orchestrator | Saturday 10 January 2026 14:40:17 +0000 (0:00:00.442) 0:00:44.080 ******
2026-01-10 14:43:01.890588 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.890596 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.890604 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.890611 | orchestrator |
2026-01-10 14:43:01.890619 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************
2026-01-10 14:43:01.890640 | orchestrator | Saturday 10 January 2026 14:40:17 +0000 (0:00:00.279) 0:00:44.359 ******
2026-01-10 14:43:01.890648 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.890656 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.890663 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.890670 | orchestrator |
2026-01-10 14:43:01.890676 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] ****************************
2026-01-10 14:43:01.890683 | orchestrator | Saturday 10 January 2026 14:40:18 +0000 (0:00:00.324) 0:00:44.684 ******
2026-01-10 14:43:01.890692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-10 14:43:01.890700 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.890713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-10 14:43:01.890727 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.890740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-10 14:43:01.890748 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.890754 | orchestrator |
2026-01-10 14:43:01.890761 | orchestrator | TASK [mariadb : Wait for slave MariaDB] ****************************************
2026-01-10 14:43:01.890767 | orchestrator | Saturday 10 January 2026 14:40:20 +0000 (0:00:02.167) 0:00:46.852 ******
2026-01-10 14:43:01.890774 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.890781 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.890792 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.890798 | orchestrator |
2026-01-10 14:43:01.890805 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] ***************************
2026-01-10 14:43:01.890814 | orchestrator | Saturday 10 January 2026 14:40:20 +0000 (0:00:00.352) 0:00:47.204 ******
2026-01-10 14:43:01.890830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-10 14:43:01.890846 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.890858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-10 14:43:01.890876 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.890891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-10 14:43:01.890901 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.890912 | orchestrator |
2026-01-10 14:43:01.890922 | orchestrator | TASK [mariadb : Wait for master mariadb] ***************************************
2026-01-10 14:43:01.890933 | orchestrator | Saturday 10 January 2026 14:40:23 +0000 (0:00:02.516) 0:00:49.721 ******
2026-01-10 14:43:01.890944 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.890955 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.890962 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.890969 | orchestrator |
2026-01-10 14:43:01.890975 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-01-10 14:43:01.890987 | orchestrator | Saturday 10 January 2026 14:40:23 +0000 (0:00:00.346) 0:00:50.067 ******
2026-01-10 14:43:01.890993 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.891000 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.891007 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.891014 | orchestrator |
2026-01-10 14:43:01.891020 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-01-10 14:43:01.891027 | orchestrator | Saturday 10 January 2026 14:40:23 +0000 (0:00:00.304) 0:00:50.372 ******
2026-01-10 14:43:01.891033 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.891040 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.891046 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.891053 | orchestrator |
2026-01-10 14:43:01.891059 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-01-10 14:43:01.891066 | orchestrator | Saturday 10 January 2026 14:40:24 +0000 (0:00:00.312) 0:00:50.685 ******
2026-01-10 14:43:01.891072 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.891079 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.891085 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.891092 | orchestrator |
2026-01-10 14:43:01.891098 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-01-10 14:43:01.891105 | orchestrator | Saturday 10 January 2026 14:40:24 +0000 (0:00:00.761) 0:00:51.446 ******
2026-01-10 14:43:01.891111 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.891124 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.891130 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.891137 | orchestrator |
2026-01-10 14:43:01.891144 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-01-10 14:43:01.891150 | orchestrator | Saturday 10 January 2026 14:40:25 +0000 (0:00:00.323) 0:00:51.770 ******
2026-01-10 14:43:01.891157 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:43:01.891163 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:43:01.891170 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:43:01.891176 | orchestrator |
2026-01-10 14:43:01.891183 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-01-10 14:43:01.891189 | orchestrator | Saturday 10 January 2026 14:40:26 +0000 (0:00:01.001) 0:00:52.772 ******
2026-01-10 14:43:01.891196 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:43:01.891203 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:43:01.891209 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:43:01.891216 | orchestrator |
2026-01-10 14:43:01.891222 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-01-10 14:43:01.891229 | orchestrator | Saturday 10 January 2026 14:40:26 +0000 (0:00:00.620) 0:00:53.392 ******
2026-01-10 14:43:01.891235 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:43:01.891242 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:43:01.891248 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:43:01.891255 | orchestrator |
2026-01-10 14:43:01.891261 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-01-10 14:43:01.891268 | orchestrator | Saturday 10 January 2026 14:40:27 +0000 (0:00:00.370) 0:00:53.763 ******
2026-01-10 14:43:01.891277 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-01-10 14:43:01.891289 | orchestrator | ...ignoring
2026-01-10 14:43:01.891306 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-01-10 14:43:01.891318 | orchestrator | ...ignoring
2026-01-10 14:43:01.891328 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-01-10 14:43:01.891345 | orchestrator | ...ignoring
2026-01-10 14:43:01.891355 | orchestrator |
2026-01-10 14:43:01.891366 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-01-10 14:43:01.891377 | orchestrator | Saturday 10 January 2026 14:40:38 +0000 (0:00:10.817) 0:01:04.580 ******
2026-01-10 14:43:01.891387 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:43:01.891397 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:43:01.891407 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:43:01.891440 | orchestrator |
2026-01-10 14:43:01.891451 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-01-10 14:43:01.891462 | orchestrator | Saturday 10 January 2026 14:40:38 +0000 (0:00:00.330) 0:01:04.911 ******
2026-01-10 14:43:01.891472 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.891483 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.891494 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.891504 | orchestrator |
2026-01-10 14:43:01.891516 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-01-10 14:43:01.891527 | orchestrator | Saturday 10 January 2026 14:40:38 +0000 (0:00:00.443) 0:01:05.354 ******
2026-01-10 14:43:01.891537 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.891549 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.891560 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.891572 | orchestrator |
2026-01-10 14:43:01.891582 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-01-10 14:43:01.891593 | orchestrator | Saturday 10 January 2026 14:40:39 +0000 (0:00:00.321) 0:01:05.676 ******
2026-01-10 14:43:01.891600 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.891614 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.891620 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.891626 | orchestrator |
2026-01-10 14:43:01.891633 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-01-10 14:43:01.891640 | orchestrator | Saturday 10 January 2026 14:40:39 +0000 (0:00:00.313) 0:01:05.989 ******
2026-01-10 14:43:01.891646 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:43:01.891653 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:43:01.891659 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:43:01.891666 | orchestrator |
2026-01-10 14:43:01.891672 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-01-10 14:43:01.891679 | orchestrator | Saturday 10 January 2026 14:40:39 +0000 (0:00:00.327) 0:01:06.317 ******
2026-01-10 14:43:01.891685 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.891698 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.891705 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.891711 | orchestrator |
2026-01-10 14:43:01.891718 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-10 14:43:01.891724 | orchestrator | Saturday 10 January 2026 14:40:40 +0000 (0:00:00.659) 0:01:06.976 ******
2026-01-10 14:43:01.891731 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.891737 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.891748 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-01-10 14:43:01.891758 | orchestrator |
2026-01-10 14:43:01.891768 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-01-10 14:43:01.891778 | orchestrator | Saturday 10 January 2026 14:40:40 +0000 (0:00:00.400) 0:01:07.377 ******
2026-01-10 14:43:01.891787 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:43:01.891797 | orchestrator |
2026-01-10 14:43:01.891808 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-01-10 14:43:01.891819 | orchestrator | Saturday 10 January 2026 14:40:51 +0000 (0:00:10.952) 0:01:18.329 ******
2026-01-10 14:43:01.891831 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:43:01.891842 | orchestrator |
2026-01-10 14:43:01.891853 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-10 14:43:01.891865 | orchestrator | Saturday 10 January 2026 14:40:51 +0000 (0:00:00.153) 0:01:18.482 ******
2026-01-10 14:43:01.891876 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.891886 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.891896 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.891906 | orchestrator |
2026-01-10 14:43:01.891917 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-01-10 14:43:01.891928 | orchestrator | Saturday 10 January 2026 14:40:53 +0000 (0:00:01.082) 0:01:19.565 ******
2026-01-10 14:43:01.891940 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:43:01.891950 | orchestrator |
2026-01-10 14:43:01.891962 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-01-10 14:43:01.891969 | orchestrator | Saturday 10 January 2026 14:41:00 +0000 (0:00:07.884) 0:01:27.449 ******
2026-01-10 14:43:01.891976 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:43:01.891982 | orchestrator |
2026-01-10 14:43:01.891989 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-01-10 14:43:01.891995 | orchestrator | Saturday 10 January 2026 14:41:02 +0000 (0:00:01.669) 0:01:29.119 ******
2026-01-10 14:43:01.892002 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:43:01.892008 | orchestrator |
2026-01-10 14:43:01.892015 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-01-10 14:43:01.892021 | orchestrator | Saturday 10 January 2026 14:41:04 +0000 (0:00:02.226) 0:01:31.346 ******
2026-01-10 14:43:01.892028 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:43:01.892034 | orchestrator |
2026-01-10 14:43:01.892041 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-01-10 14:43:01.892047 | orchestrator | Saturday 10 January 2026 14:41:04 +0000 (0:00:00.138) 0:01:31.484 ******
2026-01-10 14:43:01.892064 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.892071 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.892077 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.892083 | orchestrator |
2026-01-10 14:43:01.892090 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-01-10 14:43:01.892096 | orchestrator | Saturday 10 January 2026 14:41:05 +0000 (0:00:00.460) 0:01:31.945 ******
2026-01-10 14:43:01.892103 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.892109 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-01-10 14:43:01.892116 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:43:01.892123 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:43:01.892129 | orchestrator |
2026-01-10 14:43:01.892141 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-01-10 14:43:01.892148 | orchestrator | skipping: no hosts matched
2026-01-10 14:43:01.892154 | orchestrator |
2026-01-10 14:43:01.892161 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-01-10 14:43:01.892168 | orchestrator |
2026-01-10 14:43:01.892174 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-01-10 14:43:01.892181 | orchestrator | Saturday 10 January 2026 14:41:06 +0000 (0:00:00.625) 0:01:32.570 ******
2026-01-10 14:43:01.892187 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:43:01.892194 | orchestrator |
2026-01-10 14:43:01.892200 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-01-10 14:43:01.892207 | orchestrator | Saturday 10 January 2026 14:41:28 +0000 (0:00:22.683) 0:01:55.254 ******
2026-01-10 14:43:01.892213 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:43:01.892220 | orchestrator |
2026-01-10 14:43:01.892226 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-01-10 14:43:01.892233 | orchestrator | Saturday 10 January 2026 14:41:39 +0000 (0:00:10.719) 0:02:05.973 ******
2026-01-10 14:43:01.892239 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:43:01.892246 | orchestrator |
2026-01-10 14:43:01.892252 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-01-10 14:43:01.892259 | orchestrator |
2026-01-10 14:43:01.892265 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-01-10 14:43:01.892272 | orchestrator | Saturday 10 January 2026 14:41:42 +0000 (0:00:02.641) 0:02:08.614 ******
2026-01-10 14:43:01.892278 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:43:01.892285 | orchestrator |
2026-01-10 14:43:01.892291 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-01-10 14:43:01.892298 | orchestrator | Saturday 10 January 2026 14:42:04 +0000 (0:00:22.609) 0:02:31.224 ******
2026-01-10 14:43:01.892304 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:43:01.892311 | orchestrator |
2026-01-10 14:43:01.892317 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-01-10 14:43:01.892324 | orchestrator | Saturday 10 January 2026 14:42:15 +0000 (0:00:10.619) 0:02:41.844 ******
2026-01-10 14:43:01.892330 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:43:01.892337 | orchestrator |
2026-01-10 14:43:01.892343 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-01-10 14:43:01.892350 | orchestrator |
2026-01-10 14:43:01.892363 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-01-10 14:43:01.892370 | orchestrator | Saturday 10 January 2026 14:42:17 +0000 (0:00:02.326) 0:02:44.170 ******
2026-01-10 14:43:01.892376 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:43:01.892383 | orchestrator |
2026-01-10 14:43:01.892390 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-01-10 14:43:01.892396 | orchestrator | Saturday 10 January 2026 14:42:30 +0000 (0:00:12.553) 0:02:56.724 ******
2026-01-10 14:43:01.892403 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:43:01.892409 | orchestrator |
2026-01-10 14:43:01.892436 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-01-10 14:43:01.892448 | orchestrator | Saturday 10 January 2026 14:42:34 +0000 (0:00:04.718) 0:03:01.442 ******
2026-01-10 14:43:01.892466 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:43:01.892473 | orchestrator |
2026-01-10 14:43:01.892480 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-01-10 14:43:01.892487 | orchestrator |
2026-01-10 14:43:01.892493 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-01-10 14:43:01.892500 | orchestrator | Saturday 10 January 2026 14:42:37 +0000 (0:00:02.557) 0:03:04.000 ******
2026-01-10 14:43:01.892506 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:43:01.892513 | orchestrator |
2026-01-10 14:43:01.892520 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-01-10 14:43:01.892526 | orchestrator | Saturday 10 January 2026 14:42:38 +0000 (0:00:00.574) 0:03:04.575 ******
2026-01-10 14:43:01.892533 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.892540 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.892546 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:43:01.892553 | orchestrator |
2026-01-10 14:43:01.892560 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-01-10 14:43:01.892566 | orchestrator | Saturday 10 January 2026 14:42:40 +0000 (0:00:02.518) 0:03:07.093 ******
2026-01-10 14:43:01.892573 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.892579 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.892586 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:43:01.892592 | orchestrator |
2026-01-10 14:43:01.892599 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-01-10 14:43:01.892605 | orchestrator | Saturday 10 January 2026 14:42:43 +0000 (0:00:02.574) 0:03:09.668 ******
2026-01-10 14:43:01.892612 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.892619 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.892625 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:43:01.892632 | orchestrator |
2026-01-10 14:43:01.892638 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-01-10 14:43:01.892645 | orchestrator | Saturday 10 January 2026 14:42:45 +0000 (0:00:02.395) 0:03:12.063 ******
2026-01-10 14:43:01.892651 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.892658 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.892665 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:43:01.892671 | orchestrator |
2026-01-10 14:43:01.892680 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-01-10 14:43:01.892691 | orchestrator | Saturday 10 January 2026 14:42:47 +0000 (0:00:02.386) 0:03:14.450 ******
2026-01-10 14:43:01.892702 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:43:01.892713 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:43:01.892725 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:43:01.892736 | orchestrator |
2026-01-10 14:43:01.892748 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-01-10 14:43:01.892760 | orchestrator | Saturday 10 January 2026 14:42:52 +0000 (0:00:05.018) 0:03:19.469 ******
2026-01-10 14:43:01.892767 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.892774 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.892780 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.892787 | orchestrator |
2026-01-10 14:43:01.892798 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-01-10 14:43:01.892805 | orchestrator | Saturday 10 January 2026 14:42:55 +0000 (0:00:02.572) 0:03:22.041 ******
2026-01-10 14:43:01.892811 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.892818 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.892824 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.892831 | orchestrator |
2026-01-10 14:43:01.892838 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-01-10 14:43:01.892844 | orchestrator | Saturday 10 January 2026 14:42:56 +0000 (0:00:00.959) 0:03:23.001 ******
2026-01-10 14:43:01.892851 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:43:01.892857 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:43:01.892868 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:43:01.892874 | orchestrator |
2026-01-10 14:43:01.892881 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-01-10 14:43:01.892888 | orchestrator | Saturday 10 January 2026 14:42:59 +0000 (0:00:02.790) 0:03:25.792 ******
2026-01-10 14:43:01.892894 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:43:01.892901 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:43:01.892907 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:43:01.892914 | orchestrator |
2026-01-10 14:43:01.892920 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:43:01.892927 | orchestrator | localhost      : ok=3   changed=0  unreachable=0 failed=0 skipped=1   rescued=0 ignored=1
2026-01-10 14:43:01.892935 | orchestrator | testbed-node-0 : ok=36  changed=17 unreachable=0 failed=0 skipped=39  rescued=0 ignored=1
2026-01-10 14:43:01.892943 | orchestrator | testbed-node-1 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1
2026-01-10 14:43:01.892950 | orchestrator | testbed-node-2 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1
2026-01-10 14:43:01.892960 | orchestrator |
2026-01-10 14:43:01.892971 | orchestrator |
2026-01-10 14:43:01.892993 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:43:01.893008 | orchestrator | Saturday 10 January 2026 14:42:59 +0000 (0:00:00.449) 0:03:26.242 ******
2026-01-10 14:43:01.893018 | orchestrator | ===============================================================================
2026-01-10 14:43:01.893028 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 45.29s
2026-01-10 14:43:01.893039 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 21.34s
2026-01-10 14:43:01.893050 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.55s
2026-01-10 14:43:01.893059 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.95s
2026-01-10 14:43:01.893069 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.82s
2026-01-10 14:43:01.893080 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.88s
2026-01-10 14:43:01.893091 | orchestrator | service-check : mariadb | Get container facts --------------------------- 5.02s
2026-01-10 14:43:01.893102 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.97s
2026-01-10 14:43:01.893112 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.91s
2026-01-10 14:43:01.893122 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.72s
2026-01-10 14:43:01.893136 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.88s
2026-01-10 14:43:01.893147 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.86s
2026-01-10 14:43:01.893158 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 3.79s
2026-01-10 14:43:01.893169 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.08s
2026-01-10 14:43:01.893180 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.05s
2026-01-10 14:43:01.893189 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.04s
2026-01-10 14:43:01.893195 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.87s
2026-01-10 14:43:01.893202 | orchestrator | Check MariaDB service --------------------------------------------------- 2.86s
2026-01-10 14:43:01.893208 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.79s
2026-01-10 14:43:01.893215 | orchestrator | mariadb :
Creating mysql monitor user ----------------------------------- 2.57s 2026-01-10 14:43:01.893222 | orchestrator | 2026-01-10 14:43:01 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:01.893236 | orchestrator | 2026-01-10 14:43:01 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:43:01.893243 | orchestrator | 2026-01-10 14:43:01 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:04.942589 | orchestrator | 2026-01-10 14:43:04 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED 2026-01-10 14:43:04.942856 | orchestrator | 2026-01-10 14:43:04 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:04.944209 | orchestrator | 2026-01-10 14:43:04 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:43:04.944255 | orchestrator | 2026-01-10 14:43:04 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:07.990552 | orchestrator | 2026-01-10 14:43:07 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED 2026-01-10 14:43:07.990730 | orchestrator | 2026-01-10 14:43:07 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:07.991795 | orchestrator | 2026-01-10 14:43:07 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:43:07.991851 | orchestrator | 2026-01-10 14:43:07 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:11.029170 | orchestrator | 2026-01-10 14:43:11 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED 2026-01-10 14:43:11.031728 | orchestrator | 2026-01-10 14:43:11 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:11.035369 | orchestrator | 2026-01-10 14:43:11 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:43:11.035465 | orchestrator | 2026-01-10 14:43:11 | INFO  | Wait 1 second(s) until 
the next check 2026-01-10 14:43:14.082433 | orchestrator | 2026-01-10 14:43:14 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED 2026-01-10 14:43:14.085126 | orchestrator | 2026-01-10 14:43:14 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:14.085177 | orchestrator | 2026-01-10 14:43:14 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:43:14.085186 | orchestrator | 2026-01-10 14:43:14 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:17.124459 | orchestrator | 2026-01-10 14:43:17 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED 2026-01-10 14:43:17.126755 | orchestrator | 2026-01-10 14:43:17 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:17.128913 | orchestrator | 2026-01-10 14:43:17 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:43:17.128971 | orchestrator | 2026-01-10 14:43:17 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:20.168999 | orchestrator | 2026-01-10 14:43:20 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED 2026-01-10 14:43:20.169318 | orchestrator | 2026-01-10 14:43:20 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:20.170992 | orchestrator | 2026-01-10 14:43:20 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:43:20.171049 | orchestrator | 2026-01-10 14:43:20 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:23.219899 | orchestrator | 2026-01-10 14:43:23 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED 2026-01-10 14:43:23.219995 | orchestrator | 2026-01-10 14:43:23 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:23.220054 | orchestrator | 2026-01-10 14:43:23 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 
14:43:23.220075 | orchestrator | 2026-01-10 14:43:23 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:26.257170 | orchestrator | 2026-01-10 14:43:26 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED 2026-01-10 14:43:26.257545 | orchestrator | 2026-01-10 14:43:26 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:26.259921 | orchestrator | 2026-01-10 14:43:26 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:43:26.259992 | orchestrator | 2026-01-10 14:43:26 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:29.295888 | orchestrator | 2026-01-10 14:43:29 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED 2026-01-10 14:43:29.297357 | orchestrator | 2026-01-10 14:43:29 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:29.299075 | orchestrator | 2026-01-10 14:43:29 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:43:29.299132 | orchestrator | 2026-01-10 14:43:29 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:32.339056 | orchestrator | 2026-01-10 14:43:32 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED 2026-01-10 14:43:32.339148 | orchestrator | 2026-01-10 14:43:32 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:32.339167 | orchestrator | 2026-01-10 14:43:32 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:43:32.339172 | orchestrator | 2026-01-10 14:43:32 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:35.372329 | orchestrator | 2026-01-10 14:43:35 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state STARTED 2026-01-10 14:43:35.375306 | orchestrator | 2026-01-10 14:43:35 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:35.378626 | orchestrator | 2026-01-10 14:43:35 | 
INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:43:35.378720 | orchestrator | 2026-01-10 14:43:35 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:38.430365 | orchestrator | 2026-01-10 14:43:38.430485 | orchestrator | 2026-01-10 14:43:38 | INFO  | Task d11af9e0-773e-46f4-9ea7-f4be64669380 is in state SUCCESS 2026-01-10 14:43:38.432604 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-10 14:43:38.432651 | orchestrator | 2.16.14 2026-01-10 14:43:38.432657 | orchestrator | 2026-01-10 14:43:38.432662 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-01-10 14:43:38.432667 | orchestrator | 2026-01-10 14:43:38.432671 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-10 14:43:38.432676 | orchestrator | Saturday 10 January 2026 14:41:29 +0000 (0:00:00.650) 0:00:00.650 ****** 2026-01-10 14:43:38.432680 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:43:38.432685 | orchestrator | 2026-01-10 14:43:38.432689 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-10 14:43:38.432693 | orchestrator | Saturday 10 January 2026 14:41:29 +0000 (0:00:00.625) 0:00:01.275 ****** 2026-01-10 14:43:38.432698 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:43:38.432702 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:43:38.432706 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:43:38.432710 | orchestrator | 2026-01-10 14:43:38.432714 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-10 14:43:38.432718 | orchestrator | Saturday 10 January 2026 14:41:30 +0000 (0:00:00.689) 0:00:01.965 ****** 2026-01-10 14:43:38.432741 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:43:38.432746 | 
orchestrator | ok: [testbed-node-4]
2026-01-10 14:43:38.432750 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:43:38.432753 | orchestrator |
2026-01-10 14:43:38.432757 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-10 14:43:38.432761 | orchestrator | Saturday 10 January 2026 14:41:30 +0000 (0:00:00.302) 0:00:02.267 ******
2026-01-10 14:43:38.432765 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:43:38.432769 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:43:38.432773 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:43:38.432777 | orchestrator |
2026-01-10 14:43:38.432781 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-10 14:43:38.432785 | orchestrator | Saturday 10 January 2026 14:41:31 +0000 (0:00:00.898) 0:00:03.166 ******
2026-01-10 14:43:38.432789 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:43:38.432792 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:43:38.432796 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:43:38.432800 | orchestrator |
2026-01-10 14:43:38.432804 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-10 14:43:38.432808 | orchestrator | Saturday 10 January 2026 14:41:32 +0000 (0:00:00.317) 0:00:03.483 ******
2026-01-10 14:43:38.432812 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:43:38.432816 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:43:38.432820 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:43:38.432824 | orchestrator |
2026-01-10 14:43:38.432875 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-10 14:43:38.432879 | orchestrator | Saturday 10 January 2026 14:41:32 +0000 (0:00:00.290) 0:00:03.774 ******
2026-01-10 14:43:38.432882 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:43:38.432925 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:43:38.432931 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:43:38.432937 | orchestrator |
2026-01-10 14:43:38.432943 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-10 14:43:38.432948 | orchestrator | Saturday 10 January 2026 14:41:32 +0000 (0:00:00.317) 0:00:04.091 ******
2026-01-10 14:43:38.432954 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.432994 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:43:38.433002 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:43:38.433008 | orchestrator |
2026-01-10 14:43:38.433671 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-10 14:43:38.433682 | orchestrator | Saturday 10 January 2026 14:41:33 +0000 (0:00:00.502) 0:00:04.593 ******
2026-01-10 14:43:38.433686 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:43:38.433691 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:43:38.433695 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:43:38.433699 | orchestrator |
2026-01-10 14:43:38.433703 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-10 14:43:38.433707 | orchestrator | Saturday 10 January 2026 14:41:33 +0000 (0:00:00.299) 0:00:04.893 ******
2026-01-10 14:43:38.433711 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-10 14:43:38.433716 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-10 14:43:38.433720 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-10 14:43:38.433724 | orchestrator |
2026-01-10 14:43:38.433728 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-10 14:43:38.433731 | orchestrator | Saturday 10 January 2026 14:41:34 +0000 (0:00:00.698) 0:00:05.591 ******
2026-01-10 14:43:38.433735 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:43:38.433739 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:43:38.433743 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:43:38.433747 | orchestrator |
2026-01-10 14:43:38.433751 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-10 14:43:38.433758 | orchestrator | Saturday 10 January 2026 14:41:34 +0000 (0:00:00.424) 0:00:06.016 ******
2026-01-10 14:43:38.433769 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-10 14:43:38.433773 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-10 14:43:38.433777 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-10 14:43:38.433781 | orchestrator |
2026-01-10 14:43:38.433784 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-10 14:43:38.433788 | orchestrator | Saturday 10 January 2026 14:41:36 +0000 (0:00:02.275) 0:00:08.291 ******
2026-01-10 14:43:38.433792 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-10 14:43:38.433796 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-10 14:43:38.433800 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-10 14:43:38.433804 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.433808 | orchestrator |
2026-01-10 14:43:38.433834 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-10 14:43:38.433838 | orchestrator | Saturday 10 January 2026 14:41:37 +0000 (0:00:00.662) 0:00:08.954 ******
2026-01-10 14:43:38.433844 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.433850 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.433854 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.433857 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:43:38.433861 | orchestrator | 2026-01-10 14:43:38.433865 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-10 14:43:38.433869 | orchestrator | Saturday 10 January 2026 14:41:38 +0000 (0:00:00.879) 0:00:09.834 ****** 2026-01-10 14:43:38.433874 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.433880 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.433884 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.433887 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:43:38.433891 | orchestrator | 2026-01-10 14:43:38.433895 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-10 14:43:38.433899 | orchestrator | Saturday 10 January 2026 14:41:38 +0000 (0:00:00.349) 0:00:10.183 ****** 2026-01-10 14:43:38.433904 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c8120fdfe5a3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-10 14:41:35.417380', 'end': '2026-01-10 14:41:35.459393', 'delta': '0:00:00.042013', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c8120fdfe5a3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-10 14:43:38.433917 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '40334733961f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-10 14:41:36.221391', 'end': '2026-01-10 14:41:36.257797', 'delta': '0:00:00.036406', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 
'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['40334733961f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-10 14:43:38.433931 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e5cac7951682', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-10 14:41:36.772756', 'end': '2026-01-10 14:41:36.806382', 'delta': '0:00:00.033626', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e5cac7951682'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-10 14:43:38.433936 | orchestrator | 2026-01-10 14:43:38.433939 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-10 14:43:38.433943 | orchestrator | Saturday 10 January 2026 14:41:39 +0000 (0:00:00.209) 0:00:10.393 ****** 2026-01-10 14:43:38.433947 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:43:38.433951 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:43:38.433958 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:43:38.433964 | orchestrator | 2026-01-10 14:43:38.433970 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-10 14:43:38.433976 | orchestrator | Saturday 10 January 2026 14:41:39 +0000 (0:00:00.513) 0:00:10.906 ****** 2026-01-10 14:43:38.433982 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-01-10 14:43:38.433988 | orchestrator | 2026-01-10 
14:43:38.433994 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-10 14:43:38.434000 | orchestrator | Saturday 10 January 2026 14:41:41 +0000 (0:00:02.115) 0:00:13.021 ******
2026-01-10 14:43:38.434005 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.434012 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:43:38.434080 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:43:38.434086 | orchestrator |
2026-01-10 14:43:38.434092 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-10 14:43:38.434098 | orchestrator | Saturday 10 January 2026 14:41:41 +0000 (0:00:00.290) 0:00:13.311 ******
2026-01-10 14:43:38.434104 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.434110 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:43:38.434115 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:43:38.434121 | orchestrator |
2026-01-10 14:43:38.434127 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-10 14:43:38.434139 | orchestrator | Saturday 10 January 2026 14:41:42 +0000 (0:00:00.427) 0:00:13.739 ******
2026-01-10 14:43:38.434145 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.434150 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:43:38.434156 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:43:38.434162 | orchestrator |
2026-01-10 14:43:38.434168 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-10 14:43:38.434174 | orchestrator | Saturday 10 January 2026 14:41:42 +0000 (0:00:00.534) 0:00:14.273 ******
2026-01-10 14:43:38.434179 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:43:38.434185 | orchestrator |
2026-01-10 14:43:38.434190 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-10 14:43:38.434196 | orchestrator | Saturday 10 January 2026 14:41:43 +0000 (0:00:00.173) 0:00:14.447 ******
2026-01-10 14:43:38.434202 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.434207 | orchestrator |
2026-01-10 14:43:38.434213 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-10 14:43:38.434218 | orchestrator | Saturday 10 January 2026 14:41:43 +0000 (0:00:00.243) 0:00:14.691 ******
2026-01-10 14:43:38.434224 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.434230 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:43:38.434235 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:43:38.434242 | orchestrator |
2026-01-10 14:43:38.434247 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-10 14:43:38.434253 | orchestrator | Saturday 10 January 2026 14:41:43 +0000 (0:00:00.320) 0:00:15.011 ******
2026-01-10 14:43:38.434260 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.434266 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:43:38.434272 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:43:38.434278 | orchestrator |
2026-01-10 14:43:38.434286 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-10 14:43:38.434292 | orchestrator | Saturday 10 January 2026 14:41:44 +0000 (0:00:00.329) 0:00:15.341 ******
2026-01-10 14:43:38.434298 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.434304 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:43:38.434310 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:43:38.434316 | orchestrator |
2026-01-10 14:43:38.434322 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-10 14:43:38.434332 | orchestrator | Saturday 10 January 2026 14:41:44 +0000 (0:00:00.357) 0:00:15.871 ******
2026-01-10 14:43:38.434337 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.434343 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:43:38.434348 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:43:38.434354 | orchestrator |
2026-01-10 14:43:38.434364 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-10 14:43:38.434375 | orchestrator | Saturday 10 January 2026 14:41:44 +0000 (0:00:00.332) 0:00:16.228 ******
2026-01-10 14:43:38.434400 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.434406 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:43:38.434412 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:43:38.434417 | orchestrator |
2026-01-10 14:43:38.434423 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-10 14:43:38.434429 | orchestrator | Saturday 10 January 2026 14:41:45 +0000 (0:00:00.329) 0:00:16.560 ******
2026-01-10 14:43:38.434436 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.434442 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:43:38.434448 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:43:38.434454 | orchestrator |
2026-01-10 14:43:38.434491 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-10 14:43:38.434498 | orchestrator | Saturday 10 January 2026 14:41:45 +0000 (0:00:00.329) 0:00:16.890 ******
2026-01-10 14:43:38.434504 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.434511 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:43:38.434516 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:43:38.434533 | orchestrator |
2026-01-10 14:43:38.434538 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-10 14:43:38.434615 | orchestrator | Saturday 10 January 2026 14:41:46 +0000 (0:00:00.612) 0:00:17.503 ******
2026-01-10 14:43:38.434625 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6bac10f4--8703--5b93--90a3--91ba865f27b3-osd--block--6bac10f4--8703--5b93--90a3--91ba865f27b3', 'dm-uuid-LVM-uH2Al5eNaR4ncNlj6O0iPJ5SHvylf9HIo5uifasG5P7LrbpfS2web6cXCqroC1KK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ef830303--d908--5775--964e--bef8687288a6-osd--block--ef830303--d908--5775--964e--bef8687288a6', 'dm-uuid-LVM-hwyi5YZZ5T0V9hBEIvqpWwg3zruYopvYJ3dpdkoCkycM0D263lUAQLxdyI128ab2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434645 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-01-10 14:43:38.434693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part1', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part14', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part15', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part16', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:43:38.434711 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6bac10f4--8703--5b93--90a3--91ba865f27b3-osd--block--6bac10f4--8703--5b93--90a3--91ba865f27b3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-B32ZwJ-eBQc-y79V-idgx-GHMM-RIEc-kPdv3Y', 'scsi-0QEMU_QEMU_HARDDISK_70c6fd94-218f-483a-b965-10c70b1b97fc', 'scsi-SQEMU_QEMU_HARDDISK_70c6fd94-218f-483a-b965-10c70b1b97fc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:43:38.434729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ef830303--d908--5775--964e--bef8687288a6-osd--block--ef830303--d908--5775--964e--bef8687288a6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dDr3Q4-vkot-1toB-qHzf-rt63-1YC4-a2cdsm', 'scsi-0QEMU_QEMU_HARDDISK_f7705bd4-29b3-411e-b8b9-50568fcffd73', 'scsi-SQEMU_QEMU_HARDDISK_f7705bd4-29b3-411e-b8b9-50568fcffd73'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:43:38.434737 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2130b2ec-580e-4b39-88b4-748d7926916f', 'scsi-SQEMU_QEMU_HARDDISK_2130b2ec-580e-4b39-88b4-748d7926916f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:43:38.434741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0fad3856--f6d1--50e2--a5cb--d9f4a0859299-osd--block--0fad3856--f6d1--50e2--a5cb--d9f4a0859299', 'dm-uuid-LVM-NrSndplu8YjxJZJR7UELD6OYsvV50bPZ1u2VEUIggImfgCLc9zhjhhZDbVtJX9QT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:43:38.434749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--39355231--3192--5ff7--9e27--947e8968f1e9-osd--block--39355231--3192--5ff7--9e27--947e8968f1e9', 
'dm-uuid-LVM-0dLLKJtm6H324NqK1ZOHec17jVXqGr5vNKj6jpTpF1lhxA6YbcYHPuFYNGYyDWSE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434779 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434784 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434791 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part1', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part14', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part15', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part16', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:43:38.434821 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0fad3856--f6d1--50e2--a5cb--d9f4a0859299-osd--block--0fad3856--f6d1--50e2--a5cb--d9f4a0859299'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Vvc4YC-2Ex3-eCr9-vnZS-ADWO-gj04-g7abB6', 'scsi-0QEMU_QEMU_HARDDISK_763a4a26-d97a-40e2-a569-d464b2971007', 'scsi-SQEMU_QEMU_HARDDISK_763a4a26-d97a-40e2-a569-d464b2971007'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:43:38.434825 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--39355231--3192--5ff7--9e27--947e8968f1e9-osd--block--39355231--3192--5ff7--9e27--947e8968f1e9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-60uRIt-IULU-CMat-eEKR-GmLG-bbFO-QnA2Tt', 'scsi-0QEMU_QEMU_HARDDISK_45b03c06-0ab6-4b62-8b16-77c772305c6a', 'scsi-SQEMU_QEMU_HARDDISK_45b03c06-0ab6-4b62-8b16-77c772305c6a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:43:38.434829 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:43:38.434836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6c5241f-60aa-42cf-822c-98275b24deb1', 'scsi-SQEMU_QEMU_HARDDISK_e6c5241f-60aa-42cf-822c-98275b24deb1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:43:38.434842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:43:38.434848 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:43:38.434857 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4cb3fc90--004d--5443--9ae7--f5eff9c4438f-osd--block--4cb3fc90--004d--5443--9ae7--f5eff9c4438f', 'dm-uuid-LVM-fIYasPDKY6yyb0lbN1hYZudeZijwr05t0znOImwORuoEgjaGyB4fyTgEynvK6HFS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dec76364--a7ee--5469--8bc3--2dcf5060f83e-osd--block--dec76364--a7ee--5469--8bc3--2dcf5060f83e', 'dm-uuid-LVM-ELjjXI7PsiwNbDCw3Snq8tT0U2GbdoLWczg8BVDKOFs22fypwHVROqY12ftkOQHx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434899 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434913 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-01-10 14:43:38.434919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434934 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-10 14:43:38.434946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part1', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part14', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part15', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part16', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:43:38.434954 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4cb3fc90--004d--5443--9ae7--f5eff9c4438f-osd--block--4cb3fc90--004d--5443--9ae7--f5eff9c4438f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-J4k07S-Vr1V-a78k-IT0w-c3z0-Eftr-0EfL69', 'scsi-0QEMU_QEMU_HARDDISK_4515c98e-1f25-421e-81d3-264e20827141', 'scsi-SQEMU_QEMU_HARDDISK_4515c98e-1f25-421e-81d3-264e20827141'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:43:38.434959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--dec76364--a7ee--5469--8bc3--2dcf5060f83e-osd--block--dec76364--a7ee--5469--8bc3--2dcf5060f83e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rvYJNa-YbX5-CU38-DuHY-Y6W2-TgfW-vshxzL', 'scsi-0QEMU_QEMU_HARDDISK_9cc4e4a6-fdb0-4f2b-8497-7d80ad86af00', 'scsi-SQEMU_QEMU_HARDDISK_9cc4e4a6-fdb0-4f2b-8497-7d80ad86af00'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:43:38.434968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_355a7212-75f2-41c4-a284-fbc15ac49d3c', 'scsi-SQEMU_QEMU_HARDDISK_355a7212-75f2-41c4-a284-fbc15ac49d3c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:43:38.434976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-10 14:43:38.434981 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:43:38.434985 | orchestrator | 2026-01-10 14:43:38.434988 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-10 14:43:38.434992 | orchestrator | Saturday 10 January 2026 14:41:46 +0000 (0:00:00.685) 0:00:18.188 ****** 2026-01-10 14:43:38.434996 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6bac10f4--8703--5b93--90a3--91ba865f27b3-osd--block--6bac10f4--8703--5b93--90a3--91ba865f27b3', 'dm-uuid-LVM-uH2Al5eNaR4ncNlj6O0iPJ5SHvylf9HIo5uifasG5P7LrbpfS2web6cXCqroC1KK'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435001 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ef830303--d908--5775--964e--bef8687288a6-osd--block--ef830303--d908--5775--964e--bef8687288a6', 'dm-uuid-LVM-hwyi5YZZ5T0V9hBEIvqpWwg3zruYopvYJ3dpdkoCkycM0D263lUAQLxdyI128ab2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435005 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435014 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435018 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435027 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435031 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435057 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435062 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0fad3856--f6d1--50e2--a5cb--d9f4a0859299-osd--block--0fad3856--f6d1--50e2--a5cb--d9f4a0859299', 'dm-uuid-LVM-NrSndplu8YjxJZJR7UELD6OYsvV50bPZ1u2VEUIggImfgCLc9zhjhhZDbVtJX9QT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435066 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435075 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--39355231--3192--5ff7--9e27--947e8968f1e9-osd--block--39355231--3192--5ff7--9e27--947e8968f1e9', 'dm-uuid-LVM-0dLLKJtm6H324NqK1ZOHec17jVXqGr5vNKj6jpTpF1lhxA6YbcYHPuFYNGYyDWSE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435083 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435087 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part1', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part14', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part15', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part16', 'scsi-SQEMU_QEMU_HARDDISK_21aeb2d8-b6d0-4615-8a9a-0fb4b5fbf431-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435095 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435105 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6bac10f4--8703--5b93--90a3--91ba865f27b3-osd--block--6bac10f4--8703--5b93--90a3--91ba865f27b3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-B32ZwJ-eBQc-y79V-idgx-GHMM-RIEc-kPdv3Y', 'scsi-0QEMU_QEMU_HARDDISK_70c6fd94-218f-483a-b965-10c70b1b97fc', 'scsi-SQEMU_QEMU_HARDDISK_70c6fd94-218f-483a-b965-10c70b1b97fc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435110 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ef830303--d908--5775--964e--bef8687288a6-osd--block--ef830303--d908--5775--964e--bef8687288a6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dDr3Q4-vkot-1toB-qHzf-rt63-1YC4-a2cdsm', 'scsi-0QEMU_QEMU_HARDDISK_f7705bd4-29b3-411e-b8b9-50568fcffd73', 'scsi-SQEMU_QEMU_HARDDISK_f7705bd4-29b3-411e-b8b9-50568fcffd73'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435114 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435118 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2130b2ec-580e-4b39-88b4-748d7926916f', 'scsi-SQEMU_QEMU_HARDDISK_2130b2ec-580e-4b39-88b4-748d7926916f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435124 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435131 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435135 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:43:38.435143 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435147 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435151 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435155 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4cb3fc90--004d--5443--9ae7--f5eff9c4438f-osd--block--4cb3fc90--004d--5443--9ae7--f5eff9c4438f', 'dm-uuid-LVM-fIYasPDKY6yyb0lbN1hYZudeZijwr05t0znOImwORuoEgjaGyB4fyTgEynvK6HFS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435162 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435168 | 
orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dec76364--a7ee--5469--8bc3--2dcf5060f83e-osd--block--dec76364--a7ee--5469--8bc3--2dcf5060f83e', 'dm-uuid-LVM-ELjjXI7PsiwNbDCw3Snq8tT0U2GbdoLWczg8BVDKOFs22fypwHVROqY12ftkOQHx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435175 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435179 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435183 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part1', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part14', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part15', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part16', 'scsi-SQEMU_QEMU_HARDDISK_6133482b-469c-4f4b-9769-bc6dc055ce78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435193 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435200 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0fad3856--f6d1--50e2--a5cb--d9f4a0859299-osd--block--0fad3856--f6d1--50e2--a5cb--d9f4a0859299'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Vvc4YC-2Ex3-eCr9-vnZS-ADWO-gj04-g7abB6', 'scsi-0QEMU_QEMU_HARDDISK_763a4a26-d97a-40e2-a569-d464b2971007', 'scsi-SQEMU_QEMU_HARDDISK_763a4a26-d97a-40e2-a569-d464b2971007'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435204 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435208 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--39355231--3192--5ff7--9e27--947e8968f1e9-osd--block--39355231--3192--5ff7--9e27--947e8968f1e9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-60uRIt-IULU-CMat-eEKR-GmLG-bbFO-QnA2Tt', 'scsi-0QEMU_QEMU_HARDDISK_45b03c06-0ab6-4b62-8b16-77c772305c6a', 'scsi-SQEMU_QEMU_HARDDISK_45b03c06-0ab6-4b62-8b16-77c772305c6a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435215 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435221 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6c5241f-60aa-42cf-822c-98275b24deb1', 'scsi-SQEMU_QEMU_HARDDISK_e6c5241f-60aa-42cf-822c-98275b24deb1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435228 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435232 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435236 | orchestrator | skipping: 
[testbed-node-4] 2026-01-10 14:43:38.435240 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435248 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435251 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435262 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part1', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part14', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part15', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part16', 'scsi-SQEMU_QEMU_HARDDISK_832b7b05-f737-40ff-a441-99af22cffa7c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435267 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4cb3fc90--004d--5443--9ae7--f5eff9c4438f-osd--block--4cb3fc90--004d--5443--9ae7--f5eff9c4438f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-J4k07S-Vr1V-a78k-IT0w-c3z0-Eftr-0EfL69', 'scsi-0QEMU_QEMU_HARDDISK_4515c98e-1f25-421e-81d3-264e20827141', 'scsi-SQEMU_QEMU_HARDDISK_4515c98e-1f25-421e-81d3-264e20827141'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435275 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--dec76364--a7ee--5469--8bc3--2dcf5060f83e-osd--block--dec76364--a7ee--5469--8bc3--2dcf5060f83e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rvYJNa-YbX5-CU38-DuHY-Y6W2-TgfW-vshxzL', 'scsi-0QEMU_QEMU_HARDDISK_9cc4e4a6-fdb0-4f2b-8497-7d80ad86af00', 'scsi-SQEMU_QEMU_HARDDISK_9cc4e4a6-fdb0-4f2b-8497-7d80ad86af00'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435283 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_355a7212-75f2-41c4-a284-fbc15ac49d3c', 'scsi-SQEMU_QEMU_HARDDISK_355a7212-75f2-41c4-a284-fbc15ac49d3c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-10 14:43:38.435290 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-10-13-45-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-10 14:43:38.435295 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:43:38.435301 | orchestrator |
2026-01-10 14:43:38.435307 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-10 14:43:38.435312 | orchestrator | Saturday 10 January 2026 14:41:47 +0000 (0:00:00.585) 0:00:18.774 ******
2026-01-10 14:43:38.435322 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:43:38.435329 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:43:38.435335 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:43:38.435341 | orchestrator |
2026-01-10 14:43:38.435346 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-10 14:43:38.435352 | orchestrator | Saturday 10 January 2026 14:41:48 +0000 (0:00:00.672) 0:00:19.446 ******
2026-01-10 14:43:38.435359 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:43:38.435364 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:43:38.435370 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:43:38.435375 | orchestrator |
2026-01-10 14:43:38.435409 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-10 14:43:38.435421 | orchestrator | Saturday 10 January 2026 14:41:48 +0000 (0:00:00.530) 0:00:19.976 ******
2026-01-10 14:43:38.435427 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:43:38.435432 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:43:38.435438 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:43:38.435444 | orchestrator |
2026-01-10 14:43:38.435450 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-10 14:43:38.435455 | orchestrator | Saturday 10 January 2026 14:41:49 +0000 (0:00:00.597) 0:00:20.574 ******
2026-01-10 14:43:38.435461 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.435467 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:43:38.435473 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:43:38.435478 | orchestrator |
2026-01-10 14:43:38.435484 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-10 14:43:38.435490 | orchestrator | Saturday 10 January 2026 14:41:49 +0000 (0:00:00.344) 0:00:20.918 ******
2026-01-10 14:43:38.435496 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.435503 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:43:38.435509 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:43:38.435515 | orchestrator |
2026-01-10 14:43:38.435522 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-10 14:43:38.435526 | orchestrator | Saturday 10 January 2026 14:41:50 +0000 (0:00:00.470) 0:00:21.389 ******
2026-01-10 14:43:38.435530 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.435534 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:43:38.435538 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:43:38.435541 | orchestrator |
2026-01-10 14:43:38.435545 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-10 14:43:38.435549 | orchestrator | Saturday 10 January 2026 14:41:50 +0000 (0:00:00.540) 0:00:21.929 ******
2026-01-10 14:43:38.435553 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-10 14:43:38.435557 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-10 14:43:38.435561 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-10 14:43:38.435564 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-10 14:43:38.435568 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-10 14:43:38.435572 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-10 14:43:38.435575 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-10 14:43:38.435579 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-10 14:43:38.435583 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-10 14:43:38.435586 | orchestrator |
2026-01-10 14:43:38.435590 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-10 14:43:38.435594 | orchestrator | Saturday 10 January 2026 14:41:51 +0000 (0:00:00.877) 0:00:22.807 ******
2026-01-10 14:43:38.435598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-10 14:43:38.435602 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-10 14:43:38.435606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-10 14:43:38.435610 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.435614 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-10 14:43:38.435618 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-10 14:43:38.435706 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-10 14:43:38.435711 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:43:38.435715 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-10 14:43:38.435723 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-10 14:43:38.435727 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-10 14:43:38.435731 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:43:38.435758 | orchestrator |
2026-01-10 14:43:38.435762 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-10 14:43:38.435770 | orchestrator | Saturday 10 January 2026 14:41:51 +0000 (0:00:00.372) 0:00:23.180 ******
2026-01-10 14:43:38.435775 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:43:38.435779 | orchestrator |
2026-01-10 14:43:38.435783 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-10 14:43:38.435787 | orchestrator | Saturday 10 January 2026 14:41:52 +0000 (0:00:00.732) 0:00:23.912 ******
2026-01-10 14:43:38.435796 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.435800 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:43:38.435804 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:43:38.435808 | orchestrator |
2026-01-10 14:43:38.435812 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-10 14:43:38.435815 | orchestrator | Saturday 10 January 2026 14:41:52 +0000 (0:00:00.409) 0:00:24.321 ******
2026-01-10 14:43:38.435819 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.435823 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:43:38.435827 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:43:38.435830 | orchestrator |
2026-01-10 14:43:38.435834 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-10 14:43:38.435838 | orchestrator | Saturday 10 January 2026 14:41:53 +0000 (0:00:00.394) 0:00:24.716 ******
2026-01-10 14:43:38.435842 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:43:38.435845 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:43:38.435849 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:43:38.435853 | orchestrator |
2026-01-10 14:43:38.435856 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-10 14:43:38.435860 | orchestrator | Saturday 10 January 2026 14:41:53 +0000 (0:00:00.323) 0:00:25.039 ******
2026-01-10
14:43:38.435864 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:43:38.435868 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:43:38.435871 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:43:38.435875 | orchestrator | 2026-01-10 14:43:38.435879 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-10 14:43:38.435882 | orchestrator | Saturday 10 January 2026 14:41:54 +0000 (0:00:00.620) 0:00:25.659 ****** 2026-01-10 14:43:38.435886 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:43:38.435890 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:43:38.435894 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:43:38.435897 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:43:38.435901 | orchestrator | 2026-01-10 14:43:38.435905 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-10 14:43:38.435908 | orchestrator | Saturday 10 January 2026 14:41:54 +0000 (0:00:00.364) 0:00:26.024 ****** 2026-01-10 14:43:38.435912 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:43:38.435916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:43:38.435919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:43:38.435923 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:43:38.435927 | orchestrator | 2026-01-10 14:43:38.435930 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-10 14:43:38.435934 | orchestrator | Saturday 10 January 2026 14:41:55 +0000 (0:00:00.396) 0:00:26.421 ****** 2026-01-10 14:43:38.435938 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:43:38.435942 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:43:38.435945 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:43:38.435949 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:43:38.435953 | orchestrator | 2026-01-10 14:43:38.435956 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-10 14:43:38.435964 | orchestrator | Saturday 10 January 2026 14:41:55 +0000 (0:00:00.433) 0:00:26.854 ****** 2026-01-10 14:43:38.435968 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:43:38.435971 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:43:38.435975 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:43:38.435979 | orchestrator | 2026-01-10 14:43:38.435982 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-10 14:43:38.435986 | orchestrator | Saturday 10 January 2026 14:41:55 +0000 (0:00:00.345) 0:00:27.200 ****** 2026-01-10 14:43:38.435990 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-10 14:43:38.435994 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-10 14:43:38.435997 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-10 14:43:38.436001 | orchestrator | 2026-01-10 14:43:38.436005 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-10 14:43:38.436008 | orchestrator | Saturday 10 January 2026 14:41:56 +0000 (0:00:00.583) 0:00:27.783 ****** 2026-01-10 14:43:38.436012 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-10 14:43:38.436016 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-10 14:43:38.436019 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-10 14:43:38.436023 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-10 14:43:38.436027 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-01-10 14:43:38.436031 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-10 14:43:38.436037 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-10 14:43:38.436041 | orchestrator | 2026-01-10 14:43:38.436045 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-10 14:43:38.436049 | orchestrator | Saturday 10 January 2026 14:41:57 +0000 (0:00:01.008) 0:00:28.792 ****** 2026-01-10 14:43:38.436052 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-10 14:43:38.436056 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-10 14:43:38.436060 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-10 14:43:38.436063 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-10 14:43:38.436067 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-10 14:43:38.436071 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-10 14:43:38.436077 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-10 14:43:38.436081 | orchestrator | 2026-01-10 14:43:38.436085 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-01-10 14:43:38.436089 | orchestrator | Saturday 10 January 2026 14:41:59 +0000 (0:00:02.029) 0:00:30.821 ****** 2026-01-10 14:43:38.436092 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:43:38.436096 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:43:38.436100 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-01-10 14:43:38.436104 | orchestrator | 2026-01-10 14:43:38.436107 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-01-10 14:43:38.436111 | orchestrator | Saturday 10 January 2026 14:41:59 +0000 (0:00:00.361) 0:00:31.183 ****** 2026-01-10 14:43:38.436115 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-10 14:43:38.436120 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-10 14:43:38.436127 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-10 14:43:38.436131 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-10 14:43:38.436135 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-10 14:43:38.436139 | orchestrator | 2026-01-10 14:43:38.436143 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-01-10 14:43:38.436147 | orchestrator | Saturday 10 January 2026 14:42:43 +0000 (0:00:43.664) 0:01:14.848 ****** 2026-01-10 14:43:38.436151 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:43:38.436154 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:43:38.436158 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:43:38.436162 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:43:38.436166 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:43:38.436169 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:43:38.436173 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-01-10 14:43:38.436177 | orchestrator | 2026-01-10 14:43:38.436180 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-01-10 14:43:38.436184 | orchestrator | Saturday 10 January 2026 14:43:08 +0000 (0:00:24.491) 0:01:39.339 ****** 2026-01-10 14:43:38.436188 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:43:38.436192 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:43:38.436195 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:43:38.436199 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:43:38.436203 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:43:38.436209 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:43:38.436213 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-10 14:43:38.436217 | orchestrator | 2026-01-10 14:43:38.436221 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-01-10 14:43:38.436225 | orchestrator | Saturday 10 January 2026 14:43:19 +0000 (0:00:11.791) 0:01:51.130 ****** 2026-01-10 14:43:38.436228 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:43:38.436232 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-10 14:43:38.436236 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:43:38.436240 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:43:38.436243 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-10 14:43:38.436249 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:43:38.436257 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:43:38.436260 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-10 14:43:38.436264 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:43:38.436268 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:43:38.436272 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-10 14:43:38.436275 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:43:38.436279 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:43:38.436283 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-01-10 14:43:38.436286 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:43:38.436290 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-10 14:43:38.436294 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-10 14:43:38.436298 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-10 14:43:38.436302 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-01-10 14:43:38.436306 | orchestrator | 2026-01-10 14:43:38.436309 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:43:38.436313 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-10 14:43:38.436318 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-10 14:43:38.436321 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-10 14:43:38.436326 | orchestrator | 2026-01-10 14:43:38.436331 | orchestrator | 2026-01-10 14:43:38.436337 | orchestrator | 2026-01-10 14:43:38.436343 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:43:38.436349 | orchestrator | Saturday 10 January 2026 14:43:37 +0000 (0:00:17.934) 0:02:09.065 ****** 2026-01-10 14:43:38.436359 | orchestrator | =============================================================================== 2026-01-10 14:43:38.436367 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.67s 2026-01-10 14:43:38.436372 | orchestrator | generate keys ---------------------------------------------------------- 24.49s 2026-01-10 14:43:38.436396 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.93s 
2026-01-10 14:43:38.436403 | orchestrator | get keys from monitors ------------------------------------------------- 11.79s 2026-01-10 14:43:38.436409 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.28s 2026-01-10 14:43:38.436415 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 2.12s 2026-01-10 14:43:38.436421 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.03s 2026-01-10 14:43:38.436427 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.01s 2026-01-10 14:43:38.436433 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.90s 2026-01-10 14:43:38.436439 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.88s 2026-01-10 14:43:38.436446 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.88s 2026-01-10 14:43:38.436452 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.73s 2026-01-10 14:43:38.436458 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.70s 2026-01-10 14:43:38.436464 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.69s 2026-01-10 14:43:38.436476 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.69s 2026-01-10 14:43:38.436481 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.67s 2026-01-10 14:43:38.436486 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.66s 2026-01-10 14:43:38.436490 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.63s 2026-01-10 14:43:38.436498 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.62s 2026-01-10 
14:43:38.436502 | orchestrator | ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks --- 0.61s 2026-01-10 14:43:38.436506 | orchestrator | 2026-01-10 14:43:38 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:38.436511 | orchestrator | 2026-01-10 14:43:38 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:43:38.436516 | orchestrator | 2026-01-10 14:43:38 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:41.487247 | orchestrator | 2026-01-10 14:43:41 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:41.489514 | orchestrator | 2026-01-10 14:43:41 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:43:41.490628 | orchestrator | 2026-01-10 14:43:41 | INFO  | Task 62ea0cc9-09a8-4e41-86d2-2e9e217b1894 is in state STARTED 2026-01-10 14:43:41.490654 | orchestrator | 2026-01-10 14:43:41 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:44.542203 | orchestrator | 2026-01-10 14:43:44 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:44.542753 | orchestrator | 2026-01-10 14:43:44 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:43:44.543931 | orchestrator | 2026-01-10 14:43:44 | INFO  | Task 62ea0cc9-09a8-4e41-86d2-2e9e217b1894 is in state STARTED 2026-01-10 14:43:44.543985 | orchestrator | 2026-01-10 14:43:44 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:47.590150 | orchestrator | 2026-01-10 14:43:47 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:47.592766 | orchestrator | 2026-01-10 14:43:47 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:43:47.597163 | orchestrator | 2026-01-10 14:43:47 | INFO  | Task 62ea0cc9-09a8-4e41-86d2-2e9e217b1894 is in state STARTED 2026-01-10 14:43:47.597252 | orchestrator | 
2026-01-10 14:43:47 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:50.638058 | orchestrator | 2026-01-10 14:43:50 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:50.639623 | orchestrator | 2026-01-10 14:43:50 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:43:50.640728 | orchestrator | 2026-01-10 14:43:50 | INFO  | Task 62ea0cc9-09a8-4e41-86d2-2e9e217b1894 is in state STARTED 2026-01-10 14:43:50.641091 | orchestrator | 2026-01-10 14:43:50 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:53.700527 | orchestrator | 2026-01-10 14:43:53 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:53.702324 | orchestrator | 2026-01-10 14:43:53 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:43:53.703477 | orchestrator | 2026-01-10 14:43:53 | INFO  | Task 62ea0cc9-09a8-4e41-86d2-2e9e217b1894 is in state STARTED 2026-01-10 14:43:53.703521 | orchestrator | 2026-01-10 14:43:53 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:56.758900 | orchestrator | 2026-01-10 14:43:56 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:56.760218 | orchestrator | 2026-01-10 14:43:56 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:43:56.761595 | orchestrator | 2026-01-10 14:43:56 | INFO  | Task 62ea0cc9-09a8-4e41-86d2-2e9e217b1894 is in state STARTED 2026-01-10 14:43:56.761718 | orchestrator | 2026-01-10 14:43:56 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:43:59.806541 | orchestrator | 2026-01-10 14:43:59 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:43:59.807867 | orchestrator | 2026-01-10 14:43:59 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:43:59.810421 | orchestrator | 2026-01-10 14:43:59 | INFO  | Task 
62ea0cc9-09a8-4e41-86d2-2e9e217b1894 is in state STARTED 2026-01-10 14:43:59.810484 | orchestrator | 2026-01-10 14:43:59 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:02.867814 | orchestrator | 2026-01-10 14:44:02 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:44:02.869651 | orchestrator | 2026-01-10 14:44:02 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:44:02.872586 | orchestrator | 2026-01-10 14:44:02 | INFO  | Task 62ea0cc9-09a8-4e41-86d2-2e9e217b1894 is in state STARTED 2026-01-10 14:44:02.872681 | orchestrator | 2026-01-10 14:44:02 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:05.924940 | orchestrator | 2026-01-10 14:44:05 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:44:05.927728 | orchestrator | 2026-01-10 14:44:05 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:44:05.930624 | orchestrator | 2026-01-10 14:44:05 | INFO  | Task 62ea0cc9-09a8-4e41-86d2-2e9e217b1894 is in state STARTED 2026-01-10 14:44:05.930671 | orchestrator | 2026-01-10 14:44:05 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:08.982998 | orchestrator | 2026-01-10 14:44:08 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:44:08.987655 | orchestrator | 2026-01-10 14:44:08 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:44:08.989401 | orchestrator | 2026-01-10 14:44:08 | INFO  | Task 62ea0cc9-09a8-4e41-86d2-2e9e217b1894 is in state STARTED 2026-01-10 14:44:08.989443 | orchestrator | 2026-01-10 14:44:08 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:12.055504 | orchestrator | 2026-01-10 14:44:12 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:44:12.057115 | orchestrator | 2026-01-10 14:44:12 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state 
STARTED 2026-01-10 14:44:12.060185 | orchestrator | 2026-01-10 14:44:12 | INFO  | Task 62ea0cc9-09a8-4e41-86d2-2e9e217b1894 is in state STARTED 2026-01-10 14:44:12.060224 | orchestrator | 2026-01-10 14:44:12 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:15.114088 | orchestrator | 2026-01-10 14:44:15 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:44:15.116208 | orchestrator | 2026-01-10 14:44:15 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:44:15.118851 | orchestrator | 2026-01-10 14:44:15 | INFO  | Task 62ea0cc9-09a8-4e41-86d2-2e9e217b1894 is in state STARTED 2026-01-10 14:44:15.119110 | orchestrator | 2026-01-10 14:44:15 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:18.174506 | orchestrator | 2026-01-10 14:44:18 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:44:18.177766 | orchestrator | 2026-01-10 14:44:18 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:44:18.179695 | orchestrator | 2026-01-10 14:44:18 | INFO  | Task 62ea0cc9-09a8-4e41-86d2-2e9e217b1894 is in state STARTED 2026-01-10 14:44:18.179747 | orchestrator | 2026-01-10 14:44:18 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:21.232859 | orchestrator | 2026-01-10 14:44:21 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:44:21.235031 | orchestrator | 2026-01-10 14:44:21 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:44:21.236898 | orchestrator | 2026-01-10 14:44:21 | INFO  | Task 62ea0cc9-09a8-4e41-86d2-2e9e217b1894 is in state SUCCESS 2026-01-10 14:44:21.238753 | orchestrator | 2026-01-10 14:44:21 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state STARTED 2026-01-10 14:44:21.239037 | orchestrator | 2026-01-10 14:44:21 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:24.296773 | orchestrator | 
2026-01-10 14:44:24 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:44:24.296861 | orchestrator | 2026-01-10 14:44:24 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:44:24.298096 | orchestrator | 2026-01-10 14:44:24 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state STARTED 2026-01-10 14:44:24.298144 | orchestrator | 2026-01-10 14:44:24 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:27.348875 | orchestrator | 2026-01-10 14:44:27 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:44:27.349229 | orchestrator | 2026-01-10 14:44:27 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:44:27.350612 | orchestrator | 2026-01-10 14:44:27 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state STARTED 2026-01-10 14:44:27.350656 | orchestrator | 2026-01-10 14:44:27 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:30.400043 | orchestrator | 2026-01-10 14:44:30 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:44:30.405531 | orchestrator | 2026-01-10 14:44:30 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:44:30.405599 | orchestrator | 2026-01-10 14:44:30 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state STARTED 2026-01-10 14:44:30.405744 | orchestrator | 2026-01-10 14:44:30 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:33.454178 | orchestrator | 2026-01-10 14:44:33 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:44:33.456901 | orchestrator | 2026-01-10 14:44:33 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:44:33.459482 | orchestrator | 2026-01-10 14:44:33 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state STARTED 2026-01-10 14:44:33.459537 | orchestrator | 2026-01-10 14:44:33 | INFO  | 
Wait 1 second(s) until the next check 2026-01-10 14:44:36.504127 | orchestrator | 2026-01-10 14:44:36 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:44:36.504616 | orchestrator | 2026-01-10 14:44:36 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:44:36.505788 | orchestrator | 2026-01-10 14:44:36 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state STARTED 2026-01-10 14:44:36.505830 | orchestrator | 2026-01-10 14:44:36 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:39.551195 | orchestrator | 2026-01-10 14:44:39 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:44:39.552176 | orchestrator | 2026-01-10 14:44:39 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:44:39.552918 | orchestrator | 2026-01-10 14:44:39 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state STARTED 2026-01-10 14:44:39.552972 | orchestrator | 2026-01-10 14:44:39 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:42.596182 | orchestrator | 2026-01-10 14:44:42 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:44:42.596248 | orchestrator | 2026-01-10 14:44:42 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:44:42.598179 | orchestrator | 2026-01-10 14:44:42 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state STARTED 2026-01-10 14:44:42.598270 | orchestrator | 2026-01-10 14:44:42 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:45.651298 | orchestrator | 2026-01-10 14:44:45 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:44:45.653386 | orchestrator | 2026-01-10 14:44:45 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state STARTED 2026-01-10 14:44:45.654852 | orchestrator | 2026-01-10 14:44:45 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state 
STARTED 2026-01-10 14:44:45.654883 | orchestrator | 2026-01-10 14:44:45 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:48.713410 | orchestrator | 2026-01-10 14:44:48 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:44:48.715224 | orchestrator | 2026-01-10 14:44:48 | INFO  | Task 6f577a16-db16-4f3d-9b0a-ec524f08f8fa is in state SUCCESS 2026-01-10 14:44:48.716336 | orchestrator | 2026-01-10 14:44:48.716378 | orchestrator | 2026-01-10 14:44:48.716386 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-01-10 14:44:48.716393 | orchestrator | 2026-01-10 14:44:48.716399 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-01-10 14:44:48.716405 | orchestrator | Saturday 10 January 2026 14:43:42 +0000 (0:00:00.161) 0:00:00.161 ****** 2026-01-10 14:44:48.716412 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-10 14:44:48.716418 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-10 14:44:48.716424 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-10 14:44:48.716430 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-10 14:44:48.716435 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-10 14:44:48.716441 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-10 14:44:48.716447 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-10 14:44:48.716452 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-10 
14:44:48.716458 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-10 14:44:48.716464 | orchestrator | 2026-01-10 14:44:48.716470 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-01-10 14:44:48.716475 | orchestrator | Saturday 10 January 2026 14:43:47 +0000 (0:00:05.007) 0:00:05.168 ****** 2026-01-10 14:44:48.716489 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-10 14:44:48.716496 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-10 14:44:48.716502 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-10 14:44:48.716522 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-10 14:44:48.716528 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-10 14:44:48.716534 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-10 14:44:48.716540 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-10 14:44:48.716546 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-10 14:44:48.716552 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-10 14:44:48.716567 | orchestrator | 2026-01-10 14:44:48.716577 | orchestrator | TASK [Create share directory] ************************************************** 2026-01-10 14:44:48.716583 | orchestrator | Saturday 10 January 2026 14:43:52 +0000 (0:00:04.431) 0:00:09.599 ****** 2026-01-10 14:44:48.716589 | orchestrator | changed: [testbed-manager -> 
localhost] 2026-01-10 14:44:48.716594 | orchestrator | 2026-01-10 14:44:48.716599 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-01-10 14:44:48.716604 | orchestrator | Saturday 10 January 2026 14:43:53 +0000 (0:00:01.050) 0:00:10.649 ****** 2026-01-10 14:44:48.716609 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-01-10 14:44:48.716615 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-10 14:44:48.716626 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-10 14:44:48.716632 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-01-10 14:44:48.716709 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-10 14:44:48.716718 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-01-10 14:44:48.716723 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-01-10 14:44:48.716892 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-01-10 14:44:48.716902 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-01-10 14:44:48.716907 | orchestrator | 2026-01-10 14:44:48.716913 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-01-10 14:44:48.716918 | orchestrator | Saturday 10 January 2026 14:44:07 +0000 (0:00:14.544) 0:00:25.193 ****** 2026-01-10 14:44:48.716923 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-01-10 14:44:48.716955 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-01-10 14:44:48.716963 | orchestrator | ok: 
[testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-10 14:44:48.716968 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-10 14:44:48.716990 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-10 14:44:48.716996 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-10 14:44:48.717001 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-01-10 14:44:48.717006 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-01-10 14:44:48.717011 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-01-10 14:44:48.717016 | orchestrator | 2026-01-10 14:44:48.717021 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-01-10 14:44:48.717026 | orchestrator | Saturday 10 January 2026 14:44:11 +0000 (0:00:03.176) 0:00:28.370 ****** 2026-01-10 14:44:48.717040 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-01-10 14:44:48.717045 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-10 14:44:48.717050 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-10 14:44:48.717055 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-01-10 14:44:48.717060 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-10 14:44:48.717065 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-01-10 14:44:48.717069 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-01-10 
14:44:48.717073 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-01-10 14:44:48.717078 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-01-10 14:44:48.717083 | orchestrator | 2026-01-10 14:44:48.717088 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:44:48.717097 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:44:48.717103 | orchestrator | 2026-01-10 14:44:48.717108 | orchestrator | 2026-01-10 14:44:48.717113 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:44:48.717118 | orchestrator | Saturday 10 January 2026 14:44:18 +0000 (0:00:07.294) 0:00:35.665 ****** 2026-01-10 14:44:48.717123 | orchestrator | =============================================================================== 2026-01-10 14:44:48.717128 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.54s 2026-01-10 14:44:48.717133 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.29s 2026-01-10 14:44:48.717138 | orchestrator | Check if ceph keys exist ------------------------------------------------ 5.01s 2026-01-10 14:44:48.717143 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.43s 2026-01-10 14:44:48.717149 | orchestrator | Check if target directories exist --------------------------------------- 3.18s 2026-01-10 14:44:48.717154 | orchestrator | Create share directory -------------------------------------------------- 1.05s 2026-01-10 14:44:48.717158 | orchestrator | 2026-01-10 14:44:48.717163 | orchestrator | 2026-01-10 14:44:48.717167 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:44:48.717172 | orchestrator | 2026-01-10 14:44:48.717176 | 
orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:44:48.717181 | orchestrator | Saturday 10 January 2026 14:43:04 +0000 (0:00:00.264) 0:00:00.264 ****** 2026-01-10 14:44:48.717186 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:44:48.717191 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:44:48.717196 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:44:48.717201 | orchestrator | 2026-01-10 14:44:48.717205 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:44:48.717210 | orchestrator | Saturday 10 January 2026 14:43:04 +0000 (0:00:00.323) 0:00:00.588 ****** 2026-01-10 14:44:48.717214 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-01-10 14:44:48.717220 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-01-10 14:44:48.717225 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-01-10 14:44:48.717230 | orchestrator | 2026-01-10 14:44:48.717235 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-01-10 14:44:48.717240 | orchestrator | 2026-01-10 14:44:48.717244 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-10 14:44:48.717249 | orchestrator | Saturday 10 January 2026 14:43:05 +0000 (0:00:00.491) 0:00:01.079 ****** 2026-01-10 14:44:48.717254 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:44:48.717259 | orchestrator | 2026-01-10 14:44:48.717264 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-01-10 14:44:48.717275 | orchestrator | Saturday 10 January 2026 14:43:05 +0000 (0:00:00.530) 0:00:01.610 ****** 2026-01-10 14:44:48.717304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 
'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:44:48.717331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:44:48.717349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:44:48.717356 | orchestrator | 2026-01-10 14:44:48.717361 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-01-10 14:44:48.717367 | orchestrator | Saturday 10 January 2026 14:43:07 +0000 (0:00:01.208) 0:00:02.818 ****** 2026-01-10 14:44:48.717372 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:44:48.717377 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:44:48.717383 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:44:48.717389 | orchestrator | 2026-01-10 14:44:48.717394 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-10 14:44:48.717399 | orchestrator | Saturday 10 January 2026 14:43:07 +0000 (0:00:00.496) 0:00:03.314 ****** 2026-01-10 14:44:48.717404 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-10 14:44:48.717410 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-10 14:44:48.717415 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-01-10 14:44:48.717421 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-01-10 14:44:48.717427 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  
2026-01-10 14:44:48.717432 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-01-10 14:44:48.717442 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-01-10 14:44:48.717448 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-01-10 14:44:48.717453 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-10 14:44:48.717458 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-10 14:44:48.717464 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-01-10 14:44:48.717469 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-01-10 14:44:48.717474 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-01-10 14:44:48.717480 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-01-10 14:44:48.717485 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-01-10 14:44:48.717490 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-01-10 14:44:48.717496 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-10 14:44:48.717501 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-10 14:44:48.717507 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-01-10 14:44:48.717513 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-01-10 14:44:48.717521 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-01-10 14:44:48.717527 | orchestrator | skipping: [testbed-node-2] => 
(item={'name': 'tacker', 'enabled': False})  2026-01-10 14:44:48.717533 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-01-10 14:44:48.717539 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-01-10 14:44:48.717545 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-01-10 14:44:48.717552 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-01-10 14:44:48.717557 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-01-10 14:44:48.717563 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-01-10 14:44:48.717569 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-01-10 14:44:48.717575 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-01-10 14:44:48.717584 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-01-10 14:44:48.717590 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-01-10 14:44:48.717596 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-01-10 14:44:48.717602 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-01-10 14:44:48.717612 | orchestrator | 2026-01-10 14:44:48.717618 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-10 14:44:48.717624 | orchestrator | Saturday 10 January 2026 14:43:08 +0000 (0:00:00.900) 0:00:04.215 ****** 2026-01-10 14:44:48.717630 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:44:48.717635 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:44:48.717641 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:44:48.717647 | orchestrator | 2026-01-10 14:44:48.717653 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-10 14:44:48.717659 | orchestrator | Saturday 10 January 2026 14:43:08 +0000 (0:00:00.379) 0:00:04.594 ****** 2026-01-10 14:44:48.717665 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:48.717671 | orchestrator | 2026-01-10 14:44:48.717677 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-10 14:44:48.717683 | orchestrator | Saturday 10 January 2026 14:43:09 +0000 (0:00:00.142) 0:00:04.737 ****** 2026-01-10 14:44:48.717689 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:48.717695 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:48.717700 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:44:48.717706 | orchestrator | 2026-01-10 14:44:48.717712 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-10 14:44:48.717718 | orchestrator | Saturday 10 January 2026 14:43:09 +0000 (0:00:00.517) 0:00:05.255 ****** 2026-01-10 14:44:48.717724 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:44:48.717730 | orchestrator | ok: 
[testbed-node-1] 2026-01-10 14:44:48.717736 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:44:48.717741 | orchestrator | 2026-01-10 14:44:48.717747 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-10 14:44:48.717753 | orchestrator | Saturday 10 January 2026 14:43:09 +0000 (0:00:00.338) 0:00:05.594 ****** 2026-01-10 14:44:48.717758 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:48.717764 | orchestrator | 2026-01-10 14:44:48.717770 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-10 14:44:48.717775 | orchestrator | Saturday 10 January 2026 14:43:10 +0000 (0:00:00.147) 0:00:05.742 ****** 2026-01-10 14:44:48.717781 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:48.717788 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:48.717794 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:44:48.717799 | orchestrator | 2026-01-10 14:44:48.717805 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-10 14:44:48.717811 | orchestrator | Saturday 10 January 2026 14:43:10 +0000 (0:00:00.337) 0:00:06.080 ****** 2026-01-10 14:44:48.717817 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:44:48.717823 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:44:48.717829 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:44:48.717835 | orchestrator | 2026-01-10 14:44:48.717841 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-10 14:44:48.717847 | orchestrator | Saturday 10 January 2026 14:43:10 +0000 (0:00:00.322) 0:00:06.402 ****** 2026-01-10 14:44:48.717853 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:48.717858 | orchestrator | 2026-01-10 14:44:48.717864 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-10 14:44:48.717870 | orchestrator | Saturday 
10 January 2026 14:43:11 +0000 (0:00:00.365) 0:00:06.768 ****** 2026-01-10 14:44:48.717881 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:48.717887 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:48.717892 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:44:48.717896 | orchestrator | 2026-01-10 14:44:48.717901 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-10 14:44:48.717906 | orchestrator | Saturday 10 January 2026 14:43:11 +0000 (0:00:00.336) 0:00:07.104 ****** 2026-01-10 14:44:48.717911 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:44:48.717917 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:44:48.717922 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:44:48.717928 | orchestrator | 2026-01-10 14:44:48.717933 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-10 14:44:48.717943 | orchestrator | Saturday 10 January 2026 14:43:11 +0000 (0:00:00.330) 0:00:07.435 ****** 2026-01-10 14:44:48.717949 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:48.717955 | orchestrator | 2026-01-10 14:44:48.717961 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-10 14:44:48.717967 | orchestrator | Saturday 10 January 2026 14:43:11 +0000 (0:00:00.138) 0:00:07.573 ****** 2026-01-10 14:44:48.717972 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:48.717978 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:48.717984 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:44:48.717990 | orchestrator | 2026-01-10 14:44:48.717996 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-10 14:44:48.718002 | orchestrator | Saturday 10 January 2026 14:43:12 +0000 (0:00:00.303) 0:00:07.877 ****** 2026-01-10 14:44:48.718009 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:44:48.718050 | 
orchestrator | ok: [testbed-node-1] 2026-01-10 14:44:48.718057 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:44:48.718063 | orchestrator | 2026-01-10 14:44:48.718069 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-10 14:44:48.718075 | orchestrator | Saturday 10 January 2026 14:43:12 +0000 (0:00:00.536) 0:00:08.413 ****** 2026-01-10 14:44:48.718081 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:48.718086 | orchestrator | 2026-01-10 14:44:48.718092 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-10 14:44:48.718100 | orchestrator | Saturday 10 January 2026 14:43:12 +0000 (0:00:00.130) 0:00:08.544 ****** 2026-01-10 14:44:48.718106 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:48.718112 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:48.718118 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:44:48.718124 | orchestrator | 2026-01-10 14:44:48.718129 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-10 14:44:48.718135 | orchestrator | Saturday 10 January 2026 14:43:13 +0000 (0:00:00.319) 0:00:08.863 ****** 2026-01-10 14:44:48.718141 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:44:48.718147 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:44:48.718153 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:44:48.718159 | orchestrator | 2026-01-10 14:44:48.718165 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-10 14:44:48.718172 | orchestrator | Saturday 10 January 2026 14:43:13 +0000 (0:00:00.328) 0:00:09.192 ****** 2026-01-10 14:44:48.718177 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:48.718182 | orchestrator | 2026-01-10 14:44:48.718187 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-10 14:44:48.718193 | 
orchestrator | Saturday 10 January 2026 14:43:13 +0000 (0:00:00.157) 0:00:09.350 ******
2026-01-10 14:44:48.718199 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:44:48.718205 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:44:48.718211 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:44:48.718216 | orchestrator |
2026-01-10 14:44:48.718222 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-10 14:44:48.718227 | orchestrator | Saturday 10 January 2026 14:43:14 +0000 (0:00:00.417) 0:00:09.768 ******
2026-01-10 14:44:48.718233 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:44:48.718238 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:44:48.718244 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:44:48.718250 | orchestrator |
2026-01-10 14:44:48.718255 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-10 14:44:48.718261 | orchestrator | Saturday 10 January 2026 14:43:14 +0000 (0:00:00.536) 0:00:10.305 ******
2026-01-10 14:44:48.718267 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:44:48.718273 | orchestrator |
2026-01-10 14:44:48.718280 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-10 14:44:48.718286 | orchestrator | Saturday 10 January 2026 14:43:14 +0000 (0:00:00.119) 0:00:10.424 ******
2026-01-10 14:44:48.718299 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:44:48.718305 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:44:48.718358 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:44:48.718364 | orchestrator |
2026-01-10 14:44:48.718370 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-10 14:44:48.718376 | orchestrator | Saturday 10 January 2026 14:43:15 +0000 (0:00:00.279) 0:00:10.703 ******
2026-01-10 14:44:48.718381 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:44:48.718386 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:44:48.718392 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:44:48.718397 | orchestrator |
2026-01-10 14:44:48.718402 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-10 14:44:48.718408 | orchestrator | Saturday 10 January 2026 14:43:15 +0000 (0:00:00.384) 0:00:11.088 ******
2026-01-10 14:44:48.718413 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:44:48.718419 | orchestrator |
2026-01-10 14:44:48.718425 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-10 14:44:48.718430 | orchestrator | Saturday 10 January 2026 14:43:15 +0000 (0:00:00.138) 0:00:11.227 ******
2026-01-10 14:44:48.718436 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:44:48.718441 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:44:48.718446 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:44:48.718451 | orchestrator |
2026-01-10 14:44:48.718457 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-10 14:44:48.718462 | orchestrator | Saturday 10 January 2026 14:43:15 +0000 (0:00:00.306) 0:00:11.534 ******
2026-01-10 14:44:48.718467 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:44:48.718473 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:44:48.718478 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:44:48.718483 | orchestrator |
2026-01-10 14:44:48.718495 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-10 14:44:48.718501 | orchestrator | Saturday 10 January 2026 14:43:16 +0000 (0:00:00.589) 0:00:12.123 ******
2026-01-10 14:44:48.718506 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:44:48.718512 | orchestrator |
2026-01-10 14:44:48.718517 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-10 14:44:48.718523 | orchestrator | Saturday 10 January 2026 14:43:16 +0000 (0:00:00.134) 0:00:12.257 ******
2026-01-10 14:44:48.718528 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:44:48.718534 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:44:48.718539 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:44:48.718544 | orchestrator |
2026-01-10 14:44:48.718550 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-10 14:44:48.718555 | orchestrator | Saturday 10 January 2026 14:43:16 +0000 (0:00:00.298) 0:00:12.556 ******
2026-01-10 14:44:48.718561 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:44:48.718566 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:44:48.718572 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:44:48.718578 | orchestrator |
2026-01-10 14:44:48.718583 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-10 14:44:48.718588 | orchestrator | Saturday 10 January 2026 14:43:17 +0000 (0:00:00.335) 0:00:12.891 ******
2026-01-10 14:44:48.718594 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:44:48.718599 | orchestrator |
2026-01-10 14:44:48.718604 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-10 14:44:48.718610 | orchestrator | Saturday 10 January 2026 14:43:17 +0000 (0:00:00.136) 0:00:13.027 ******
2026-01-10 14:44:48.718615 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:44:48.718620 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:44:48.718625 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:44:48.718631 | orchestrator |
2026-01-10 14:44:48.718636 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-01-10 14:44:48.718642 | orchestrator | Saturday 10 January 2026 14:43:17 +0000 (0:00:00.523) 0:00:13.551 ******
2026-01-10 14:44:48.718647 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:44:48.718660 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:44:48.718664 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:44:48.718673 | orchestrator |
2026-01-10 14:44:48.718678 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-01-10 14:44:48.718683 | orchestrator | Saturday 10 January 2026 14:43:19 +0000 (0:00:01.853) 0:00:15.405 ******
2026-01-10 14:44:48.718688 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-10 14:44:48.718693 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-10 14:44:48.718697 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-10 14:44:48.718702 | orchestrator |
2026-01-10 14:44:48.718707 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-01-10 14:44:48.718712 | orchestrator | Saturday 10 January 2026 14:43:21 +0000 (0:00:02.083) 0:00:17.488 ******
2026-01-10 14:44:48.718717 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-10 14:44:48.718723 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-10 14:44:48.718728 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-10 14:44:48.718733 | orchestrator |
2026-01-10 14:44:48.718738 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-01-10 14:44:48.718743 | orchestrator | Saturday 10 January 2026 14:43:24 +0000 (0:00:02.468) 0:00:19.956 ******
2026-01-10 14:44:48.718747 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-10 14:44:48.718752 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-10 14:44:48.718756 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-10 14:44:48.718761 | orchestrator |
2026-01-10 14:44:48.718766 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-01-10 14:44:48.718770 | orchestrator | Saturday 10 January 2026 14:43:26 +0000 (0:00:02.202) 0:00:22.159 ******
2026-01-10 14:44:48.718775 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:44:48.718780 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:44:48.718785 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:44:48.718791 | orchestrator |
2026-01-10 14:44:48.718795 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-01-10 14:44:48.718800 | orchestrator | Saturday 10 January 2026 14:43:26 +0000 (0:00:00.308) 0:00:22.467 ******
2026-01-10 14:44:48.718805 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:44:48.718810 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:44:48.718815 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:44:48.718820 | orchestrator |
2026-01-10 14:44:48.718825 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-10 14:44:48.718830 | orchestrator | Saturday 10 January 2026 14:43:27 +0000 (0:00:00.313) 0:00:22.780 ******
2026-01-10 14:44:48.718835 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:44:48.718840 | orchestrator |
2026-01-10 14:44:48.718845 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-01-10 14:44:48.718850 | orchestrator | Saturday 10 January 2026 14:43:27 +0000 (0:00:00.813) 0:00:23.594 ******
2026-01-10 14:44:48.718869
| orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:44:48.718885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:44:48.718895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:44:48.718904 | orchestrator | 2026-01-10 14:44:48.718909 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-10 14:44:48.718915 | orchestrator | Saturday 10 January 2026 14:43:29 +0000 (0:00:01.602) 0:00:25.197 ****** 2026-01-10 14:44:48.718925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:44:48.718935 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:48.718944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 
'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:44:48.718950 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:48.718960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:44:48.718969 | orchestrator | 
skipping: [testbed-node-2] 2026-01-10 14:44:48.718975 | orchestrator | 2026-01-10 14:44:48.718981 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-01-10 14:44:48.718987 | orchestrator | Saturday 10 January 2026 14:43:30 +0000 (0:00:00.666) 0:00:25.864 ****** 2026-01-10 14:44:48.718994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:44:48.719001 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:48.719014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:44:48.719023 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:48.719029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': 
{'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:44:48.719039 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:44:48.719045 | orchestrator | 2026-01-10 14:44:48.719051 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-01-10 14:44:48.719056 | orchestrator | Saturday 10 January 2026 14:43:31 +0000 (0:00:00.867) 0:00:26.731 ****** 2026-01-10 14:44:48.719069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 
'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:44:48.719080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:44:48.719095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-10 14:44:48.719102 | orchestrator | 2026-01-10 14:44:48.719108 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-01-10 14:44:48.719113 | orchestrator | Saturday 10 January 2026 14:43:32 +0000 (0:00:01.769) 0:00:28.501 ****** 2026-01-10 14:44:48.719119 | orchestrator | changed: [testbed-node-0] => { 2026-01-10 14:44:48.719125 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:44:48.719130 | orchestrator | } 2026-01-10 14:44:48.719136 | orchestrator | changed: [testbed-node-1] => { 2026-01-10 14:44:48.719142 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:44:48.719148 | orchestrator | } 2026-01-10 14:44:48.719153 | orchestrator | changed: [testbed-node-2] => { 2026-01-10 14:44:48.719159 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:44:48.719164 | orchestrator | } 2026-01-10 14:44:48.719169 | orchestrator | 2026-01-10 14:44:48.719175 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-10 14:44:48.719181 | orchestrator | Saturday 10 January 2026 14:43:33 +0000 (0:00:00.400) 0:00:28.902 ****** 2026-01-10 14:44:48.719195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:44:48.719201 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:48.719210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:44:48.719219 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:48.719231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option 
httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-10 14:44:48.719238 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:44:48.719244 | orchestrator | 2026-01-10 14:44:48.719250 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-10 14:44:48.719255 | orchestrator | Saturday 10 January 2026 14:43:34 +0000 (0:00:00.911) 0:00:29.813 ****** 2026-01-10 14:44:48.719261 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:44:48.719266 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:44:48.719272 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:44:48.719278 | orchestrator | 2026-01-10 14:44:48.719284 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-10 14:44:48.719290 | orchestrator | Saturday 10 January 2026 14:43:34 +0000 (0:00:00.527) 0:00:30.340 ****** 2026-01-10 14:44:48.719295 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:44:48.719301 | orchestrator | 2026-01-10 14:44:48.719307 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-01-10 14:44:48.719344 | orchestrator | Saturday 10 January 2026 14:43:35 +0000 (0:00:00.545) 0:00:30.886 ****** 2026-01-10 14:44:48.719350 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:44:48.719355 | orchestrator | 2026-01-10 14:44:48.719361 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-01-10 14:44:48.719372 | orchestrator | Saturday 10 January 2026 14:43:37 
+0000 (0:00:02.514) 0:00:33.401 ****** 2026-01-10 14:44:48.719378 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:44:48.719384 | orchestrator | 2026-01-10 14:44:48.719389 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-01-10 14:44:48.719395 | orchestrator | Saturday 10 January 2026 14:43:40 +0000 (0:00:02.363) 0:00:35.764 ****** 2026-01-10 14:44:48.719401 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:44:48.719407 | orchestrator | 2026-01-10 14:44:48.719413 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-10 14:44:48.719418 | orchestrator | Saturday 10 January 2026 14:43:56 +0000 (0:00:16.468) 0:00:52.233 ****** 2026-01-10 14:44:48.719424 | orchestrator | 2026-01-10 14:44:48.719429 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-10 14:44:48.719435 | orchestrator | Saturday 10 January 2026 14:43:56 +0000 (0:00:00.070) 0:00:52.304 ****** 2026-01-10 14:44:48.719441 | orchestrator | 2026-01-10 14:44:48.719447 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-10 14:44:48.719452 | orchestrator | Saturday 10 January 2026 14:43:56 +0000 (0:00:00.253) 0:00:52.557 ****** 2026-01-10 14:44:48.719458 | orchestrator | 2026-01-10 14:44:48.719464 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-01-10 14:44:48.719470 | orchestrator | Saturday 10 January 2026 14:43:57 +0000 (0:00:00.068) 0:00:52.626 ****** 2026-01-10 14:44:48.719475 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:44:48.719481 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:44:48.719487 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:44:48.719491 | orchestrator | 2026-01-10 14:44:48.719497 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 
14:44:48.719503 | orchestrator | testbed-node-0 : ok=38  changed=12  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0 2026-01-10 14:44:48.719509 | orchestrator | testbed-node-1 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-01-10 14:44:48.719517 | orchestrator | testbed-node-2 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-01-10 14:44:48.719523 | orchestrator | 2026-01-10 14:44:48.719528 | orchestrator | 2026-01-10 14:44:48.719533 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:44:48.719539 | orchestrator | Saturday 10 January 2026 14:44:45 +0000 (0:00:48.415) 0:01:41.042 ****** 2026-01-10 14:44:48.719544 | orchestrator | =============================================================================== 2026-01-10 14:44:48.719549 | orchestrator | horizon : Restart horizon container ------------------------------------ 48.42s 2026-01-10 14:44:48.719555 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.47s 2026-01-10 14:44:48.719560 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.51s 2026-01-10 14:44:48.719566 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.47s 2026-01-10 14:44:48.719572 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.36s 2026-01-10 14:44:48.719577 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.20s 2026-01-10 14:44:48.719583 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.08s 2026-01-10 14:44:48.719588 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.85s 2026-01-10 14:44:48.719594 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.77s 2026-01-10 14:44:48.719599 | 
orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.60s 2026-01-10 14:44:48.719604 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.21s 2026-01-10 14:44:48.719610 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.91s 2026-01-10 14:44:48.719619 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.90s 2026-01-10 14:44:48.719624 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.87s 2026-01-10 14:44:48.719633 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.81s 2026-01-10 14:44:48.719638 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.67s 2026-01-10 14:44:48.719644 | orchestrator | horizon : Update policy file name --------------------------------------- 0.59s 2026-01-10 14:44:48.719649 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s 2026-01-10 14:44:48.719655 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s 2026-01-10 14:44:48.719661 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s 2026-01-10 14:44:48.719667 | orchestrator | 2026-01-10 14:44:48 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state STARTED 2026-01-10 14:44:48.720906 | orchestrator | 2026-01-10 14:44:48 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:51.766625 | orchestrator | 2026-01-10 14:44:51 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:44:51.770827 | orchestrator | 2026-01-10 14:44:51 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state STARTED 2026-01-10 14:44:51.770883 | orchestrator | 2026-01-10 14:44:51 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:54.816178 
| orchestrator | 2026-01-10 14:44:54 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:44:54.816228 | orchestrator | 2026-01-10 14:44:54 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state STARTED 2026-01-10 14:44:54.816233 | orchestrator | 2026-01-10 14:44:54 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:44:57.865414 | orchestrator | 2026-01-10 14:44:57 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:44:57.867558 | orchestrator | 2026-01-10 14:44:57 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state STARTED 2026-01-10 14:44:57.867604 | orchestrator | 2026-01-10 14:44:57 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:45:00.912264 | orchestrator | 2026-01-10 14:45:00 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:45:00.915221 | orchestrator | 2026-01-10 14:45:00 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state STARTED 2026-01-10 14:45:00.915281 | orchestrator | 2026-01-10 14:45:00 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:45:03.955877 | orchestrator | 2026-01-10 14:45:03 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:45:03.957288 | orchestrator | 2026-01-10 14:45:03 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state STARTED 2026-01-10 14:45:03.957360 | orchestrator | 2026-01-10 14:45:03 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:45:07.004985 | orchestrator | 2026-01-10 14:45:07 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:45:07.008459 | orchestrator | 2026-01-10 14:45:07 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state STARTED 2026-01-10 14:45:07.008507 | orchestrator | 2026-01-10 14:45:07 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:45:10.056204 | orchestrator | 2026-01-10 14:45:10 | INFO  | Task 
703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:45:10.058654 | orchestrator | 2026-01-10 14:45:10 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state STARTED 2026-01-10 14:45:10.058741 | orchestrator | 2026-01-10 14:45:10 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:45:13.110723 | orchestrator | 2026-01-10 14:45:13 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:45:13.112386 | orchestrator | 2026-01-10 14:45:13 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state STARTED 2026-01-10 14:45:13.112430 | orchestrator | 2026-01-10 14:45:13 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:45:16.164843 | orchestrator | 2026-01-10 14:45:16 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:45:16.165531 | orchestrator | 2026-01-10 14:45:16 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state STARTED 2026-01-10 14:45:16.165571 | orchestrator | 2026-01-10 14:45:16 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:45:19.219233 | orchestrator | 2026-01-10 14:45:19 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:45:19.223203 | orchestrator | 2026-01-10 14:45:19 | INFO  | Task 1d2b3968-ff18-427d-8b4e-32942a34d5c1 is in state SUCCESS 2026-01-10 14:45:19.223295 | orchestrator | 2026-01-10 14:45:19 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:45:22.299347 | orchestrator | 2026-01-10 14:45:22 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED 2026-01-10 14:45:22.301626 | orchestrator | 2026-01-10 14:45:22 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED 2026-01-10 14:45:22.304192 | orchestrator | 2026-01-10 14:45:22 | INFO  | Task 769ebac9-a8d8-466e-bba8-fc224e1eda3b is in state STARTED 2026-01-10 14:45:22.306675 | orchestrator | 2026-01-10 14:45:22 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state 
STARTED 2026-01-10 14:45:22.306884 | orchestrator | 2026-01-10 14:45:22 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:45:25.355643 | orchestrator | 2026-01-10 14:45:25 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED 2026-01-10 14:45:25.355954 | orchestrator | 2026-01-10 14:45:25 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED 2026-01-10 14:45:25.357044 | orchestrator | 2026-01-10 14:45:25 | INFO  | Task 769ebac9-a8d8-466e-bba8-fc224e1eda3b is in state SUCCESS 2026-01-10 14:45:25.357791 | orchestrator | 2026-01-10 14:45:25 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:45:25.357819 | orchestrator | 2026-01-10 14:45:25 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:45:28.943039 | orchestrator | 2026-01-10 14:45:28 | INFO  | Task e6df1422-2b05-46dd-b008-408c5499098c is in state STARTED 2026-01-10 14:45:28.947707 | orchestrator | 2026-01-10 14:45:28 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED 2026-01-10 14:45:28.953668 | orchestrator | 2026-01-10 14:45:28 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED 2026-01-10 14:45:28.975387 | orchestrator | 2026-01-10 14:45:28 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state STARTED 2026-01-10 14:45:28.979164 | orchestrator | 2026-01-10 14:45:28 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED 2026-01-10 14:45:28.979336 | orchestrator | 2026-01-10 14:45:28 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:45:32.048372 | orchestrator | 2026-01-10 14:45:32 | INFO  | Task e6df1422-2b05-46dd-b008-408c5499098c is in state STARTED 2026-01-10 14:45:32.052523 | orchestrator | 2026-01-10 14:45:32 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED 2026-01-10 14:45:32.053046 | orchestrator | 2026-01-10 14:45:32 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED 2026-01-10 
14:45:32.058619 | orchestrator | 2026-01-10 14:45:32 | INFO  | Task 703b5669-de21-4506-ab77-7254d80264e5 is in state SUCCESS 2026-01-10 14:45:32.060225 | orchestrator | 2026-01-10 14:45:32.060306 | orchestrator | 2026-01-10 14:45:32.060313 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-01-10 14:45:32.060320 | orchestrator | 2026-01-10 14:45:32.060327 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-01-10 14:45:32.060336 | orchestrator | Saturday 10 January 2026 14:44:23 +0000 (0:00:00.255) 0:00:00.255 ****** 2026-01-10 14:45:32.060345 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-01-10 14:45:32.060354 | orchestrator | 2026-01-10 14:45:32.060360 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-01-10 14:45:32.060366 | orchestrator | Saturday 10 January 2026 14:44:23 +0000 (0:00:00.219) 0:00:00.474 ****** 2026-01-10 14:45:32.060373 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-01-10 14:45:32.060381 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-01-10 14:45:32.060388 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-01-10 14:45:32.060394 | orchestrator | 2026-01-10 14:45:32.060400 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-01-10 14:45:32.060407 | orchestrator | Saturday 10 January 2026 14:44:24 +0000 (0:00:01.295) 0:00:01.770 ****** 2026-01-10 14:45:32.060413 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-01-10 14:45:32.060418 | orchestrator | 2026-01-10 14:45:32.060422 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] 
*************************** 2026-01-10 14:45:32.060426 | orchestrator | Saturday 10 January 2026 14:44:26 +0000 (0:00:01.626) 0:00:03.396 ****** 2026-01-10 14:45:32.060431 | orchestrator | changed: [testbed-manager] 2026-01-10 14:45:32.060435 | orchestrator | 2026-01-10 14:45:32.060439 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-01-10 14:45:32.060443 | orchestrator | Saturday 10 January 2026 14:44:27 +0000 (0:00:00.985) 0:00:04.381 ****** 2026-01-10 14:45:32.060447 | orchestrator | changed: [testbed-manager] 2026-01-10 14:45:32.060450 | orchestrator | 2026-01-10 14:45:32.060454 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-01-10 14:45:32.060458 | orchestrator | Saturday 10 January 2026 14:44:28 +0000 (0:00:01.081) 0:00:05.462 ****** 2026-01-10 14:45:32.060462 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-01-10 14:45:32.060465 | orchestrator | ok: [testbed-manager] 2026-01-10 14:45:32.060469 | orchestrator | 2026-01-10 14:45:32.060488 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-01-10 14:45:32.060491 | orchestrator | Saturday 10 January 2026 14:45:08 +0000 (0:00:39.516) 0:00:44.978 ****** 2026-01-10 14:45:32.060495 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-01-10 14:45:32.060500 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-01-10 14:45:32.060503 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-01-10 14:45:32.060507 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-01-10 14:45:32.060511 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-01-10 14:45:32.060525 | orchestrator | 2026-01-10 14:45:32.060529 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-01-10 14:45:32.060533 | orchestrator | 
Saturday 10 January 2026 14:45:12 +0000 (0:00:04.364) 0:00:49.343 ****** 2026-01-10 14:45:32.060542 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-01-10 14:45:32.060546 | orchestrator | 2026-01-10 14:45:32.060550 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-01-10 14:45:32.060553 | orchestrator | Saturday 10 January 2026 14:45:12 +0000 (0:00:00.482) 0:00:49.826 ****** 2026-01-10 14:45:32.060557 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:45:32.060576 | orchestrator | 2026-01-10 14:45:32.060580 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-01-10 14:45:32.060584 | orchestrator | Saturday 10 January 2026 14:45:13 +0000 (0:00:00.147) 0:00:49.973 ****** 2026-01-10 14:45:32.060588 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:45:32.060592 | orchestrator | 2026-01-10 14:45:32.060630 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-01-10 14:45:32.060638 | orchestrator | Saturday 10 January 2026 14:45:13 +0000 (0:00:00.541) 0:00:50.515 ****** 2026-01-10 14:45:32.060643 | orchestrator | changed: [testbed-manager] 2026-01-10 14:45:32.060648 | orchestrator | 2026-01-10 14:45:32.060654 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-01-10 14:45:32.060660 | orchestrator | Saturday 10 January 2026 14:45:14 +0000 (0:00:01.445) 0:00:51.961 ****** 2026-01-10 14:45:32.060665 | orchestrator | changed: [testbed-manager] 2026-01-10 14:45:32.060670 | orchestrator | 2026-01-10 14:45:32.060676 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-01-10 14:45:32.060682 | orchestrator | Saturday 10 January 2026 14:45:15 +0000 (0:00:00.807) 0:00:52.769 ****** 2026-01-10 14:45:32.060687 | orchestrator | changed: [testbed-manager] 2026-01-10 14:45:32.060693 | orchestrator | 
2026-01-10 14:45:32.060699 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-01-10 14:45:32.060706 | orchestrator | Saturday 10 January 2026 14:45:16 +0000 (0:00:00.590) 0:00:53.359 ****** 2026-01-10 14:45:32.060711 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-01-10 14:45:32.060717 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-01-10 14:45:32.060722 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-01-10 14:45:32.060728 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-01-10 14:45:32.060744 | orchestrator | 2026-01-10 14:45:32.060751 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:45:32.060764 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 14:45:32.060772 | orchestrator | 2026-01-10 14:45:32.060778 | orchestrator | 2026-01-10 14:45:32.061146 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:45:32.061161 | orchestrator | Saturday 10 January 2026 14:45:18 +0000 (0:00:01.612) 0:00:54.972 ****** 2026-01-10 14:45:32.061165 | orchestrator | =============================================================================== 2026-01-10 14:45:32.061169 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.52s 2026-01-10 14:45:32.061173 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.36s 2026-01-10 14:45:32.061177 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.63s 2026-01-10 14:45:32.061181 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.61s 2026-01-10 14:45:32.061185 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.45s 2026-01-10 14:45:32.061189 | orchestrator | 
osism.services.cephclient : Create required directories ----------------- 1.30s 2026-01-10 14:45:32.061193 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.08s 2026-01-10 14:45:32.061197 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.99s 2026-01-10 14:45:32.061201 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.81s 2026-01-10 14:45:32.061205 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.59s 2026-01-10 14:45:32.061208 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.54s 2026-01-10 14:45:32.061213 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.48s 2026-01-10 14:45:32.061219 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s 2026-01-10 14:45:32.061225 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2026-01-10 14:45:32.061246 | orchestrator | 2026-01-10 14:45:32.061253 | orchestrator | 2026-01-10 14:45:32.061258 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:45:32.061288 | orchestrator | 2026-01-10 14:45:32.061295 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:45:32.061301 | orchestrator | Saturday 10 January 2026 14:45:22 +0000 (0:00:00.178) 0:00:00.178 ****** 2026-01-10 14:45:32.061307 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:45:32.061312 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:45:32.061318 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:45:32.061324 | orchestrator | 2026-01-10 14:45:32.061329 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:45:32.061335 | orchestrator | Saturday 10 January 
2026 14:45:23 +0000 (0:00:00.328) 0:00:00.507 ****** 2026-01-10 14:45:32.061349 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-10 14:45:32.061356 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-10 14:45:32.061362 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-10 14:45:32.061368 | orchestrator | 2026-01-10 14:45:32.061374 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-01-10 14:45:32.061380 | orchestrator | 2026-01-10 14:45:32.061386 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-01-10 14:45:32.061392 | orchestrator | Saturday 10 January 2026 14:45:23 +0000 (0:00:00.812) 0:00:01.319 ****** 2026-01-10 14:45:32.061398 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:45:32.061405 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:45:32.061409 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:45:32.061413 | orchestrator | 2026-01-10 14:45:32.061417 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:45:32.061422 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:45:32.061427 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:45:32.061431 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:45:32.061434 | orchestrator | 2026-01-10 14:45:32.061438 | orchestrator | 2026-01-10 14:45:32.061442 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:45:32.061446 | orchestrator | Saturday 10 January 2026 14:45:24 +0000 (0:00:00.796) 0:00:02.116 ****** 2026-01-10 14:45:32.061449 | orchestrator | 
=============================================================================== 2026-01-10 14:45:32.061453 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s 2026-01-10 14:45:32.061457 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.80s 2026-01-10 14:45:32.061460 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2026-01-10 14:45:32.061464 | orchestrator | 2026-01-10 14:45:32.061468 | orchestrator | 2026-01-10 14:45:32.061471 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:45:32.061475 | orchestrator | 2026-01-10 14:45:32.061479 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:45:32.061483 | orchestrator | Saturday 10 January 2026 14:43:04 +0000 (0:00:00.257) 0:00:00.257 ****** 2026-01-10 14:45:32.061488 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:45:32.061494 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:45:32.061500 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:45:32.061505 | orchestrator | 2026-01-10 14:45:32.061510 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:45:32.061515 | orchestrator | Saturday 10 January 2026 14:43:04 +0000 (0:00:00.304) 0:00:00.561 ****** 2026-01-10 14:45:32.061521 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-10 14:45:32.061526 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-10 14:45:32.061540 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-10 14:45:32.061545 | orchestrator | 2026-01-10 14:45:32.061551 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-01-10 14:45:32.061556 | orchestrator | 2026-01-10 14:45:32.061599 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2026-01-10 14:45:32.061607 | orchestrator | Saturday 10 January 2026 14:43:05 +0000 (0:00:00.463) 0:00:01.025 ****** 2026-01-10 14:45:32.061613 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:45:32.061620 | orchestrator | 2026-01-10 14:45:32.061626 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-10 14:45:32.061633 | orchestrator | Saturday 10 January 2026 14:43:05 +0000 (0:00:00.579) 0:00:01.605 ****** 2026-01-10 14:45:32.061645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-10 14:45:32.061661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-10 14:45:32.061669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-10 14:45:32.061705 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:45:32.061715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:45:32.061721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:45:32.061734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:45:32.061741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:45:32.061745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:45:32.061749 | orchestrator | 2026-01-10 14:45:32.061753 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-01-10 14:45:32.061756 | orchestrator | Saturday 10 January 2026 14:43:07 +0000 (0:00:01.926) 0:00:03.532 ****** 2026-01-10 14:45:32.061765 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:32.061769 | orchestrator | 2026-01-10 14:45:32.061772 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-10 14:45:32.061776 | orchestrator | Saturday 10 January 2026 14:43:07 +0000 (0:00:00.155) 0:00:03.688 ****** 2026-01-10 14:45:32.061780 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:32.061784 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:32.061789 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:32.061793 | orchestrator | 2026-01-10 14:45:32.061797 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-01-10 14:45:32.061802 | orchestrator | Saturday 10 January 2026 14:43:08 +0000 (0:00:00.534) 0:00:04.222 ****** 2026-01-10 14:45:32.061806 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:45:32.061810 | orchestrator | 2026-01-10 14:45:32.061815 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-10 14:45:32.061819 | orchestrator | Saturday 10 January 2026 14:43:09 +0000 (0:00:00.896) 0:00:05.119 ****** 2026-01-10 14:45:32.061837 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:45:32.061842 | orchestrator | 2026-01-10 14:45:32.061884 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-01-10 14:45:32.061889 | orchestrator | Saturday 10 January 2026 14:43:09 +0000 (0:00:00.618) 0:00:05.737 ****** 2026-01-10 14:45:32.061894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-10 14:45:32.061903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin', 'option httpchk']}}}}) 2026-01-10 14:45:32.061908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-10 14:45:32.061919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:45:32.061939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:45:32.061944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:45:32.061949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:45:32.061957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:45:32.061961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:45:32.061970 | orchestrator | 2026-01-10 14:45:32.061975 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-10 14:45:32.061979 | orchestrator | Saturday 10 January 2026 14:43:13 +0000 (0:00:03.624) 0:00:09.362 ****** 2026-01-10 14:45:32.061986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-10 14:45:32.061990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:45:32.061994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:45:32.061998 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:32.062009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-10 14:45:32.062061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:45:32.062068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:45:32.062074 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:32.062089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-10 14:45:32.062096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:45:32.062107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:45:32.062114 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:32.062120 | orchestrator | 2026-01-10 14:45:32.062126 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-10 14:45:32.062138 | orchestrator | Saturday 10 January 2026 14:43:14 +0000 (0:00:00.821) 0:00:10.184 ****** 2026-01-10 14:45:32.062143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-10 14:45:32.062148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:45:32.062156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:45:32.062160 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:32.062164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-10 14:45:32.062170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:45:32.062179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:45:32.062183 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:32.062187 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-10 14:45:32.062196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:45:32.062200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:45:32.062204 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:32.062207 | orchestrator | 2026-01-10 14:45:32.062211 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-10 14:45:32.062215 | orchestrator | Saturday 10 January 2026 14:43:15 +0000 (0:00:00.793) 0:00:10.977 ****** 2026-01-10 14:45:32.062222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-10 14:45:32.062231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-10 14:45:32.062239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-10 14:45:32.062244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:45:32.062248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:45:32.062259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-10 14:45:32.062281 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:45:32.062286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:45:32.062290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:45:32.062294 | orchestrator | 2026-01-10 14:45:32.062298 | orchestrator | 
TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-10 14:45:32.062301 | orchestrator | Saturday 10 January 2026 14:43:18 +0000 (0:00:03.158) 0:00:14.135 ****** 2026-01-10 14:45:32.062311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-10 14:45:32.062315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:45:32.062326 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-10 14:45:32.062331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:45:32.062339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-10 14:45:32.062344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:45:32.062348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:45:32.062359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:45:32.062363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-10 14:45:32.062366 | orchestrator | 2026-01-10 14:45:32.062370 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-10 14:45:32.062374 | orchestrator | Saturday 10 January 2026 14:43:24 +0000 (0:00:05.951) 0:00:20.087 ****** 2026-01-10 14:45:32.062378 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:45:32.062382 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:45:32.062385 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:45:32.062389 | orchestrator | 2026-01-10 14:45:32.062393 | orchestrator | TASK [keystone : Create 
Keystone domain-specific config directory] ************* 2026-01-10 14:45:32.062397 | orchestrator | Saturday 10 January 2026 14:43:25 +0000 (0:00:01.619) 0:00:21.706 ****** 2026-01-10 14:45:32.062400 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:32.062404 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:32.062408 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:32.062411 | orchestrator | 2026-01-10 14:45:32.062415 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-01-10 14:45:32.062419 | orchestrator | Saturday 10 January 2026 14:43:26 +0000 (0:00:00.598) 0:00:22.305 ****** 2026-01-10 14:45:32.062423 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:32.062426 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:32.062430 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:32.062434 | orchestrator | 2026-01-10 14:45:32.062438 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-01-10 14:45:32.062441 | orchestrator | Saturday 10 January 2026 14:43:26 +0000 (0:00:00.297) 0:00:22.602 ****** 2026-01-10 14:45:32.062445 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:32.062449 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:32.062452 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:32.062456 | orchestrator | 2026-01-10 14:45:32.062460 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-01-10 14:45:32.062463 | orchestrator | Saturday 10 January 2026 14:43:27 +0000 (0:00:00.516) 0:00:23.119 ****** 2026-01-10 14:45:32.062471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-10 14:45:32.062481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:45:32.062489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:45:32.062493 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:32.062497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-10 14:45:32.062502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:45:32.062510 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:45:32.062518 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:32.062522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-10 14:45:32.062529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-10 14:45:32.062533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-10 14:45:32.062537 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:32.062540 | orchestrator | 2026-01-10 14:45:32.062544 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-10 14:45:32.062548 | orchestrator | Saturday 10 January 2026 14:43:27 +0000 (0:00:00.600) 0:00:23.719 ****** 2026-01-10 14:45:32.062552 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:32.062556 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:32.062559 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:32.062563 | orchestrator | 2026-01-10 14:45:32.062567 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-01-10 14:45:32.062570 | orchestrator | Saturday 10 January 2026 14:43:28 +0000 (0:00:00.300) 0:00:24.020 ****** 2026-01-10 14:45:32.062574 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-10 14:45:32.062578 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-10 14:45:32.062582 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-10 14:45:32.062585 | orchestrator | 2026-01-10 14:45:32.062589 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-01-10 14:45:32.062596 | orchestrator | Saturday 10 January 2026 14:43:30 +0000 (0:00:01.787) 0:00:25.807 ****** 2026-01-10 14:45:32.062600 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:45:32.062603 | orchestrator | 2026-01-10 14:45:32.062607 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-01-10 14:45:32.062611 | orchestrator | Saturday 10 January 2026 14:43:31 +0000 (0:00:01.033) 0:00:26.841 ****** 2026-01-10 14:45:32.062614 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:45:32.062618 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:45:32.062622 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:45:32.062626 | orchestrator | 2026-01-10 14:45:32.062629 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-01-10 14:45:32.062635 | orchestrator | Saturday 10 January 2026 14:43:32 +0000 (0:00:01.123) 0:00:27.965 ****** 2026-01-10 14:45:32.062639 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:45:32.062643 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-10 14:45:32.062646 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-10 14:45:32.062650 | orchestrator | 2026-01-10 14:45:32.062654 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-01-10 14:45:32.062658 | orchestrator | Saturday 10 January 2026 14:43:33 +0000 (0:00:01.203) 
0:00:29.169 ****** 2026-01-10 14:45:32.062662 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:45:32.062665 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:45:32.062669 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:45:32.062673 | orchestrator | 2026-01-10 14:45:32.062677 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-01-10 14:45:32.062680 | orchestrator | Saturday 10 January 2026 14:43:33 +0000 (0:00:00.377) 0:00:29.547 ****** 2026-01-10 14:45:32.062684 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-10 14:45:32.062688 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-10 14:45:32.062691 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-10 14:45:32.062695 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-10 14:45:32.062699 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-10 14:45:32.062703 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-10 14:45:32.062706 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-10 14:45:32.062710 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-10 14:45:32.062714 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-10 14:45:32.062718 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-10 14:45:32.062721 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-10 
14:45:32.062728 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-10 14:45:32.062732 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-10 14:45:32.062736 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-10 14:45:32.062739 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-10 14:45:32.062743 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-10 14:45:32.062747 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-10 14:45:32.062755 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-10 14:45:32.062759 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-10 14:45:32.062763 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-10 14:45:32.062767 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-10 14:45:32.062770 | orchestrator |
2026-01-10 14:45:32.062774 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-01-10 14:45:32.062778 | orchestrator | Saturday 10 January 2026 14:43:43 +0000 (0:00:09.521) 0:00:39.069 ******
2026-01-10 14:45:32.062781 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-10 14:45:32.062785 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-10 14:45:32.062789 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-10 14:45:32.062793 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-10 14:45:32.062797 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-10 14:45:32.062800 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-10 14:45:32.062804 | orchestrator |
2026-01-10 14:45:32.062808 | orchestrator | TASK [service-check-containers : keystone | Check containers] ******************
2026-01-10 14:45:32.062811 | orchestrator | Saturday 10 January 2026 14:43:46 +0000 (0:00:03.128) 0:00:42.197 ******
2026-01-10 14:45:32.062819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-10 14:45:32.062823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-10 14:45:32.062830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-10 14:45:32.062838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-10 14:45:32.062842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-10 14:45:32.062851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-10 14:45:32.062855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-10 14:45:32.062859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-10 14:45:32.062869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-10 14:45:32.062873 | orchestrator |
2026-01-10 14:45:32.062877 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] ***
2026-01-10 14:45:32.062881 | orchestrator | Saturday 10 January 2026 14:43:49 +0000 (0:00:02.608) 0:00:44.806 ******
2026-01-10 14:45:32.062884 | orchestrator | changed: [testbed-node-0] => {
2026-01-10 14:45:32.062888 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:45:32.062892 | orchestrator | }
2026-01-10 14:45:32.062896 | orchestrator | changed: [testbed-node-1] => {
2026-01-10 14:45:32.062900 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:45:32.062904 | orchestrator | }
2026-01-10 14:45:32.062908 | orchestrator | changed: [testbed-node-2] => {
2026-01-10 14:45:32.062911 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:45:32.062915 | orchestrator | }
2026-01-10 14:45:32.062919 | orchestrator |
2026-01-10 14:45:32.062923 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-10 14:45:32.062926 | orchestrator | Saturday 10 January 2026 14:43:49 +0000 (0:00:00.402) 0:00:45.209 ******
2026-01-10 14:45:32.062930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-10 14:45:32.062938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-10 14:45:32.062942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-10 14:45:32.062952 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:45:32.062958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-10 14:45:32.062963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-10 14:45:32.062967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-10 14:45:32.062971 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:45:32.062977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-10 14:45:32.062982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-10 14:45:32.062989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-10 14:45:32.062993 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:45:32.062996 | orchestrator |
2026-01-10 14:45:32.063000 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-10 14:45:32.063004 | orchestrator | Saturday 10 January 2026 14:43:50 +0000 (0:00:01.061) 0:00:46.271 ******
2026-01-10 14:45:32.063008 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:45:32.063012 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:45:32.063015 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:45:32.063019 | orchestrator |
2026-01-10 14:45:32.063023 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-01-10 14:45:32.063026 | orchestrator | Saturday 10 January 2026 14:43:50 +0000 (0:00:00.338) 0:00:46.609 ******
2026-01-10 14:45:32.063030 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:45:32.063034 | orchestrator |
2026-01-10 14:45:32.063037 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-01-10 14:45:32.063041 | orchestrator | Saturday 10 January 2026 14:43:53 +0000 (0:00:02.236) 0:00:48.846 ******
2026-01-10 14:45:32.063046 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:45:32.063052 | orchestrator |
2026-01-10 14:45:32.063058 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-01-10 14:45:32.063064 | orchestrator | Saturday 10 January 2026 14:43:55 +0000 (0:00:02.027) 0:00:50.873 ******
2026-01-10 14:45:32.063069 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:45:32.063075 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:45:32.063081 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:45:32.063087 | orchestrator |
2026-01-10 14:45:32.063092 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-01-10 14:45:32.063098 | orchestrator | Saturday 10 January 2026 14:43:55 +0000 (0:00:00.878) 0:00:51.751 ******
2026-01-10 14:45:32.063104 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:45:32.063109 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:45:32.063115 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:45:32.063120 | orchestrator |
2026-01-10 14:45:32.063126 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-01-10 14:45:32.063132 | orchestrator | Saturday 10 January 2026 14:43:56 +0000 (0:00:00.574) 0:00:52.118 ******
2026-01-10 14:45:32.063138 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:45:32.063144 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:45:32.063149 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:45:32.063156 | orchestrator |
2026-01-10 14:45:32.063162 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-01-10 14:45:32.063246 | orchestrator | Saturday 10 January 2026 14:43:56 +0000 (0:00:00.574) 0:00:52.692 ******
2026-01-10 14:45:32.063253 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:45:32.063257 | orchestrator |
2026-01-10 14:45:32.063261 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-01-10 14:45:32.063310 | orchestrator | Saturday 10 January 2026 14:44:11 +0000 (0:00:15.081) 0:01:07.773 ******
2026-01-10 14:45:32.063314 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:45:32.063318 | orchestrator |
2026-01-10 14:45:32.063322 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-01-10 14:45:32.063326 | orchestrator | Saturday 10 January 2026 14:44:24 +0000 (0:00:12.015) 0:01:19.789 ******
2026-01-10 14:45:32.063336 | orchestrator |
2026-01-10 14:45:32.063340 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-01-10 14:45:32.063344 | orchestrator | Saturday 10 January 2026 14:44:24 +0000 (0:00:00.078) 0:01:19.868 ******
2026-01-10 14:45:32.063348 | orchestrator |
2026-01-10 14:45:32.063351 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-01-10 14:45:32.063355 | orchestrator | Saturday 10 January 2026 14:44:24 +0000 (0:00:00.073) 0:01:19.941 ******
2026-01-10 14:45:32.063359 | orchestrator |
2026-01-10 14:45:32.063367 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-01-10 14:45:32.063371 | orchestrator | Saturday 10 January 2026 14:44:24 +0000 (0:00:00.070) 0:01:20.011 ******
2026-01-10 14:45:32.063375 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:45:32.063379 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:45:32.063382 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:45:32.063386 | orchestrator |
2026-01-10 14:45:32.063390 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-01-10 14:45:32.063393 | orchestrator | Saturday 10 January 2026 14:44:38 +0000 (0:00:14.313) 0:01:34.325 ******
2026-01-10 14:45:32.063397 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:45:32.063401 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:45:32.063404 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:45:32.063408 | orchestrator |
2026-01-10 14:45:32.063412 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-01-10 14:45:32.063416 | orchestrator | Saturday 10 January 2026 14:44:43 +0000 (0:00:05.441) 0:01:39.766 ******
2026-01-10 14:45:32.063419 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:45:32.063423 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:45:32.063427 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:45:32.063430 | orchestrator |
2026-01-10 14:45:32.063434 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-10 14:45:32.063438 | orchestrator | Saturday 10 January 2026 14:44:54 +0000 (0:00:10.772) 0:01:50.539 ******
2026-01-10 14:45:32.063442 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:45:32.063445 | orchestrator |
2026-01-10 14:45:32.063449 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-01-10 14:45:32.063453 | orchestrator | Saturday 10 January 2026 14:44:55 +0000 (0:00:00.574) 0:01:51.113 ******
2026-01-10 14:45:32.063457 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:45:32.063460 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:45:32.063464 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:45:32.063468 | orchestrator |
2026-01-10 14:45:32.063540 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-01-10 14:45:32.063556 | orchestrator | Saturday 10 January 2026 14:44:56 +0000 (0:00:01.194) 0:01:52.307 ******
2026-01-10 14:45:32.063560 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:45:32.063563 | orchestrator |
2026-01-10 14:45:32.063567 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-01-10 14:45:32.063571 | orchestrator | Saturday 10 January 2026 14:44:58 +0000 (0:00:01.706) 0:01:54.014 ******
2026-01-10 14:45:32.063575 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-01-10 14:45:32.063579 | orchestrator |
2026-01-10 14:45:32.063585 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting services] *************
2026-01-10 14:45:32.063589 | orchestrator | Saturday 10 January 2026 14:45:10 +0000 (0:00:12.060) 0:02:06.074 ******
2026-01-10 14:45:32.063592 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-01-10 14:45:32.063596 | orchestrator |
2026-01-10 14:45:32.063600 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting endpoints] ************
2026-01-10 14:45:32.063603 | orchestrator | Saturday 10 January 2026 14:45:15 +0000 (0:00:04.746) 0:02:10.821 ******
2026-01-10 14:45:32.063607 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-01-10 14:45:32.063618 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-01-10 14:45:32.063622 | orchestrator |
2026-01-10 14:45:32.063626 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-01-10 14:45:32.063630 | orchestrator | Saturday 10 January 2026 14:45:22 +0000 (0:00:07.789) 0:02:18.611 ******
2026-01-10 14:45:32.063633 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:45:32.063637 | orchestrator |
2026-01-10 14:45:32.063641 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-01-10 14:45:32.063644 | orchestrator | Saturday 10 January 2026 14:45:22 +0000 (0:00:00.174) 0:02:18.786 ******
2026-01-10 14:45:32.063648 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:45:32.063652 | orchestrator |
2026-01-10 14:45:32.063656 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-01-10 14:45:32.063663 | orchestrator | Saturday 10 January 2026 14:45:23 +0000 (0:00:00.198) 0:02:18.984 ******
2026-01-10 14:45:32.063668 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:45:32.063674 | orchestrator |
2026-01-10 14:45:32.063679 | orchestrator | TASK [service-ks-register : keystone | Granting/revoking user roles] ***********
2026-01-10 14:45:32.063685 | orchestrator | Saturday 10 January 2026 14:45:23 +0000 (0:00:00.174) 0:02:19.158 ******
2026-01-10 14:45:32.063691 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:45:32.063697 | orchestrator |
2026-01-10 14:45:32.063703 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-01-10 14:45:32.063708 | orchestrator | Saturday 10 January 2026 14:45:23 +0000 (0:00:00.329) 0:02:19.488 ******
2026-01-10 14:45:32.063714 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:45:32.063719 | orchestrator |
2026-01-10 14:45:32.063724 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-10 14:45:32.063731 | orchestrator | Saturday 10 January 2026 14:45:28 +0000 (0:00:04.393) 0:02:23.881 ******
2026-01-10 14:45:32.063737 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:45:32.063743 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:45:32.063749 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:45:32.063755 | orchestrator |
2026-01-10 14:45:32.063761 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:45:32.063767 | orchestrator | testbed-node-0 : ok=34  changed=20  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-01-10 14:45:32.063775 | orchestrator | testbed-node-1 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-01-10 14:45:32.063784 | orchestrator | testbed-node-2 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-01-10 14:45:32.063788 | orchestrator |
2026-01-10 14:45:32.063792 | orchestrator |
2026-01-10 14:45:32.063795 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:45:32.063799 | orchestrator | Saturday 10 January 2026 14:45:29 +0000 (0:00:01.188) 0:02:25.069 ******
2026-01-10 14:45:32.063803 | orchestrator | ===============================================================================
2026-01-10 14:45:32.063806 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.08s
2026-01-10 14:45:32.063810 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 14.31s
2026-01-10 14:45:32.063814 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.06s
2026-01-10 14:45:32.063818 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 12.02s
2026-01-10 14:45:32.063821 | orchestrator | keystone : Restart keystone container ---------------------------------- 10.77s
2026-01-10 14:45:32.063825 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.52s
2026-01-10 14:45:32.063829 | orchestrator | service-ks-register : keystone | Creating/deleting endpoints ------------ 7.79s
2026-01-10 14:45:32.063832 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.95s
2026-01-10 14:45:32.063841 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.44s
2026-01-10 14:45:32.063845 | orchestrator | service-ks-register : keystone | Creating/deleting services ------------- 4.75s
2026-01-10 14:45:32.063849 | orchestrator | keystone : Creating default user role ----------------------------------- 4.39s
2026-01-10 14:45:32.063852 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.62s
2026-01-10 14:45:32.063856 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.16s
2026-01-10 14:45:32.063859 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.13s
2026-01-10 14:45:32.063863 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.61s
2026-01-10 14:45:32.063867 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.24s
2026-01-10 14:45:32.063870 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.03s
2026-01-10 14:45:32.063874 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.93s
2026-01-10 14:45:32.063882 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.79s
2026-01-10 14:45:32.063885 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.71s
2026-01-10 14:45:32.063890 | orchestrator | 2026-01-10 14:45:32 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED
2026-01-10 14:45:32.063894 | orchestrator | 2026-01-10 14:45:32 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED
2026-01-10 14:45:32.063899 | orchestrator | 2026-01-10 14:45:32 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:35.121526 | orchestrator | 2026-01-10 14:45:35 | INFO  | Task e6df1422-2b05-46dd-b008-408c5499098c is in state STARTED
2026-01-10 14:45:35.121612 | orchestrator | 2026-01-10 14:45:35 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED
2026-01-10 14:45:35.121622 | orchestrator | 2026-01-10 14:45:35 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED
2026-01-10 14:45:35.121628 | orchestrator | 2026-01-10 14:45:35 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED
2026-01-10 14:45:35.121635 | orchestrator | 2026-01-10 14:45:35 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED
2026-01-10 14:45:35.121641 | orchestrator | 2026-01-10 14:45:35 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:38.140623 | orchestrator | 2026-01-10 14:45:38 | INFO  | Task e6df1422-2b05-46dd-b008-408c5499098c is in state STARTED
2026-01-10 14:45:38.142780 | orchestrator | 2026-01-10 14:45:38 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED
2026-01-10 14:45:38.144639 | orchestrator | 2026-01-10 14:45:38 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED
2026-01-10 14:45:38.146640 | orchestrator | 2026-01-10 14:45:38 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED
2026-01-10 14:45:38.148945 | orchestrator | 2026-01-10 14:45:38 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED
2026-01-10 14:45:38.148999 | orchestrator | 2026-01-10 14:45:38 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:41.192228 | orchestrator | 2026-01-10 14:45:41 | INFO  | Task e6df1422-2b05-46dd-b008-408c5499098c is in state STARTED
2026-01-10 14:45:41.193511 | orchestrator | 2026-01-10 14:45:41 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED
2026-01-10 14:45:41.194306 | orchestrator | 2026-01-10 14:45:41 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED
2026-01-10 14:45:41.195042 | orchestrator | 2026-01-10 14:45:41 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED
2026-01-10 14:45:41.195918 | orchestrator | 2026-01-10 14:45:41 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED
2026-01-10 14:45:41.195939 | orchestrator | 2026-01-10 14:45:41 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:44.234182 | orchestrator | 2026-01-10 14:45:44 | INFO  | Task e6df1422-2b05-46dd-b008-408c5499098c is in state STARTED
2026-01-10 14:45:44.236627 | orchestrator | 2026-01-10 14:45:44 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED
2026-01-10 14:45:44.238557 | orchestrator | 2026-01-10 14:45:44 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED
2026-01-10 14:45:44.239990 | orchestrator | 2026-01-10 14:45:44 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED
2026-01-10 14:45:44.241742 | orchestrator | 2026-01-10 14:45:44 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED
2026-01-10 14:45:44.241806 | orchestrator | 2026-01-10 14:45:44 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:47.278762 | orchestrator | 2026-01-10 14:45:47 | INFO  | Task e6df1422-2b05-46dd-b008-408c5499098c is in state STARTED
2026-01-10 14:45:47.283430 | orchestrator | 2026-01-10 14:45:47 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED
2026-01-10 14:45:47.285144 | orchestrator | 2026-01-10 14:45:47 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED
2026-01-10 14:45:47.287480 | orchestrator | 2026-01-10 14:45:47 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED
2026-01-10 14:45:47.289869 | orchestrator | 2026-01-10 14:45:47 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED
2026-01-10 14:45:47.289911 | orchestrator | 2026-01-10 14:45:47 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:50.335702 | orchestrator | 2026-01-10 14:45:50 | INFO  | Task e6df1422-2b05-46dd-b008-408c5499098c is in state STARTED
2026-01-10 14:45:50.339154 | orchestrator | 2026-01-10 14:45:50 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED
2026-01-10 14:45:50.343148 | orchestrator | 2026-01-10 14:45:50 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED
2026-01-10 14:45:50.344785 | orchestrator | 2026-01-10 14:45:50 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED
2026-01-10 14:45:50.346584 | orchestrator | 2026-01-10 14:45:50 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED
2026-01-10 14:45:50.346770 | orchestrator | 2026-01-10 14:45:50 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:53.401833 | orchestrator | 2026-01-10 14:45:53 | INFO  | Task e6df1422-2b05-46dd-b008-408c5499098c is in state STARTED
2026-01-10 14:45:53.407290 | orchestrator | 2026-01-10 14:45:53 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED
2026-01-10 14:45:53.408197 | orchestrator | 2026-01-10 14:45:53 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED
2026-01-10 14:45:53.411814 | orchestrator | 2026-01-10 14:45:53 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED
2026-01-10 14:45:53.413238 | orchestrator | 2026-01-10 14:45:53 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED
2026-01-10 14:45:53.413340 | orchestrator | 2026-01-10 14:45:53 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:56.443633 | orchestrator | 2026-01-10 14:45:56 | INFO  | Task e6df1422-2b05-46dd-b008-408c5499098c is in state STARTED
2026-01-10 14:45:56.444732 | orchestrator | 2026-01-10 14:45:56 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED
2026-01-10 14:45:56.445843 | orchestrator | 2026-01-10 14:45:56 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED
2026-01-10 14:45:56.446968 | orchestrator | 2026-01-10 14:45:56 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED
2026-01-10 14:45:56.448366 | orchestrator | 2026-01-10 14:45:56 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED
2026-01-10 14:45:56.448391 | orchestrator | 2026-01-10 14:45:56 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:45:59.504891 | orchestrator | 2026-01-10 14:45:59 | INFO  | Task e6df1422-2b05-46dd-b008-408c5499098c is in state STARTED
2026-01-10 14:45:59.507519 | orchestrator | 2026-01-10 14:45:59 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED
2026-01-10 14:45:59.508885 | orchestrator | 2026-01-10 14:45:59 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED
2026-01-10 14:45:59.511854 | orchestrator | 2026-01-10 14:45:59 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED
2026-01-10 14:45:59.513210 | orchestrator | 2026-01-10 14:45:59 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED
2026-01-10 14:45:59.513540 | orchestrator | 2026-01-10 14:45:59 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:46:02.556947 | orchestrator | 2026-01-10 14:46:02 | INFO  | Task e6df1422-2b05-46dd-b008-408c5499098c is in state STARTED
2026-01-10 14:46:02.557612 | orchestrator | 2026-01-10 14:46:02 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED
2026-01-10 14:46:02.559847 | orchestrator | 2026-01-10 14:46:02 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED
2026-01-10 14:46:02.561100 | orchestrator | 2026-01-10 14:46:02 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED
2026-01-10 14:46:02.561867 | orchestrator | 2026-01-10 14:46:02 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED
2026-01-10 14:46:02.561998 | orchestrator | 2026-01-10 14:46:02 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:46:05.605061 | orchestrator | 2026-01-10 14:46:05 | INFO  | Task e6df1422-2b05-46dd-b008-408c5499098c is in state STARTED
2026-01-10 14:46:05.605524 | orchestrator | 2026-01-10 14:46:05 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED
2026-01-10 14:46:05.606375 | orchestrator | 2026-01-10 14:46:05 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED
2026-01-10 14:46:05.607166 | orchestrator | 2026-01-10 14:46:05 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED
2026-01-10 14:46:05.607921 | orchestrator | 2026-01-10 14:46:05 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED
2026-01-10 14:46:05.607953 | orchestrator | 2026-01-10 14:46:05 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:46:08.644968 | orchestrator | 2026-01-10 14:46:08 | INFO  | Task e6df1422-2b05-46dd-b008-408c5499098c is in state SUCCESS
2026-01-10 14:46:08.646350 | orchestrator | 2026-01-10 14:46:08 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED
2026-01-10 14:46:08.648532 | orchestrator
| 2026-01-10 14:46:08 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED 2026-01-10 14:46:08.649990 | orchestrator | 2026-01-10 14:46:08 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED 2026-01-10 14:46:08.651487 | orchestrator | 2026-01-10 14:46:08 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:46:08.651723 | orchestrator | 2026-01-10 14:46:08 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:11.689450 | orchestrator | 2026-01-10 14:46:11 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED 2026-01-10 14:46:11.691562 | orchestrator | 2026-01-10 14:46:11 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED 2026-01-10 14:46:11.693036 | orchestrator | 2026-01-10 14:46:11 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED 2026-01-10 14:46:11.694559 | orchestrator | 2026-01-10 14:46:11 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:46:11.695863 | orchestrator | 2026-01-10 14:46:11 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:46:11.695899 | orchestrator | 2026-01-10 14:46:11 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:14.721082 | orchestrator | 2026-01-10 14:46:14 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED 2026-01-10 14:46:14.721475 | orchestrator | 2026-01-10 14:46:14 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED 2026-01-10 14:46:14.723104 | orchestrator | 2026-01-10 14:46:14 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED 2026-01-10 14:46:14.723474 | orchestrator | 2026-01-10 14:46:14 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:46:14.724464 | orchestrator | 2026-01-10 14:46:14 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:46:14.724497 | orchestrator | 
2026-01-10 14:46:14 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:17.753983 | orchestrator | 2026-01-10 14:46:17 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED 2026-01-10 14:46:17.754076 | orchestrator | 2026-01-10 14:46:17 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED 2026-01-10 14:46:17.754088 | orchestrator | 2026-01-10 14:46:17 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED 2026-01-10 14:46:17.754096 | orchestrator | 2026-01-10 14:46:17 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:46:17.754103 | orchestrator | 2026-01-10 14:46:17 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:46:17.754110 | orchestrator | 2026-01-10 14:46:17 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:20.785389 | orchestrator | 2026-01-10 14:46:20 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED 2026-01-10 14:46:20.785421 | orchestrator | 2026-01-10 14:46:20 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED 2026-01-10 14:46:20.785428 | orchestrator | 2026-01-10 14:46:20 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED 2026-01-10 14:46:20.785435 | orchestrator | 2026-01-10 14:46:20 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:46:20.785441 | orchestrator | 2026-01-10 14:46:20 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:46:20.785448 | orchestrator | 2026-01-10 14:46:20 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:23.807802 | orchestrator | 2026-01-10 14:46:23 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED 2026-01-10 14:46:23.810605 | orchestrator | 2026-01-10 14:46:23 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED 2026-01-10 14:46:23.815329 | orchestrator | 2026-01-10 14:46:23 | INFO  | 
Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED 2026-01-10 14:46:23.816295 | orchestrator | 2026-01-10 14:46:23 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:46:23.819902 | orchestrator | 2026-01-10 14:46:23 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:46:23.819995 | orchestrator | 2026-01-10 14:46:23 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:26.850472 | orchestrator | 2026-01-10 14:46:26 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED 2026-01-10 14:46:26.851130 | orchestrator | 2026-01-10 14:46:26 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED 2026-01-10 14:46:26.852134 | orchestrator | 2026-01-10 14:46:26 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED 2026-01-10 14:46:26.852887 | orchestrator | 2026-01-10 14:46:26 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:46:26.854962 | orchestrator | 2026-01-10 14:46:26 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:46:26.855011 | orchestrator | 2026-01-10 14:46:26 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:29.882928 | orchestrator | 2026-01-10 14:46:29 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED 2026-01-10 14:46:29.884335 | orchestrator | 2026-01-10 14:46:29 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED 2026-01-10 14:46:29.884925 | orchestrator | 2026-01-10 14:46:29 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED 2026-01-10 14:46:29.885864 | orchestrator | 2026-01-10 14:46:29 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:46:29.887459 | orchestrator | 2026-01-10 14:46:29 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:46:29.887495 | orchestrator | 2026-01-10 14:46:29 | INFO  | Wait 1 
second(s) until the next check 2026-01-10 14:46:32.907622 | orchestrator | 2026-01-10 14:46:32 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED 2026-01-10 14:46:32.910855 | orchestrator | 2026-01-10 14:46:32 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED 2026-01-10 14:46:32.911594 | orchestrator | 2026-01-10 14:46:32 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED 2026-01-10 14:46:32.912524 | orchestrator | 2026-01-10 14:46:32 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:46:32.913498 | orchestrator | 2026-01-10 14:46:32 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:46:32.913525 | orchestrator | 2026-01-10 14:46:32 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:35.966969 | orchestrator | 2026-01-10 14:46:35 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED 2026-01-10 14:46:35.968113 | orchestrator | 2026-01-10 14:46:35 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED 2026-01-10 14:46:35.970092 | orchestrator | 2026-01-10 14:46:35 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED 2026-01-10 14:46:35.970745 | orchestrator | 2026-01-10 14:46:35 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:46:35.971460 | orchestrator | 2026-01-10 14:46:35 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:46:35.971488 | orchestrator | 2026-01-10 14:46:35 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:39.007535 | orchestrator | 2026-01-10 14:46:39 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED 2026-01-10 14:46:39.009345 | orchestrator | 2026-01-10 14:46:39 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED 2026-01-10 14:46:39.010533 | orchestrator | 2026-01-10 14:46:39 | INFO  | Task 
3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED 2026-01-10 14:46:39.012335 | orchestrator | 2026-01-10 14:46:39 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:46:39.014948 | orchestrator | 2026-01-10 14:46:39 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:46:39.014980 | orchestrator | 2026-01-10 14:46:39 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:42.038321 | orchestrator | 2026-01-10 14:46:42 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED 2026-01-10 14:46:42.040405 | orchestrator | 2026-01-10 14:46:42 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED 2026-01-10 14:46:42.041036 | orchestrator | 2026-01-10 14:46:42 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED 2026-01-10 14:46:42.041609 | orchestrator | 2026-01-10 14:46:42 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:46:42.042374 | orchestrator | 2026-01-10 14:46:42 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:46:42.042396 | orchestrator | 2026-01-10 14:46:42 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:45.066212 | orchestrator | 2026-01-10 14:46:45 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED 2026-01-10 14:46:45.066703 | orchestrator | 2026-01-10 14:46:45 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED 2026-01-10 14:46:45.067569 | orchestrator | 2026-01-10 14:46:45 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED 2026-01-10 14:46:45.068410 | orchestrator | 2026-01-10 14:46:45 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:46:45.069033 | orchestrator | 2026-01-10 14:46:45 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:46:45.069095 | orchestrator | 2026-01-10 14:46:45 | INFO  | Wait 1 
second(s) until the next check 2026-01-10 14:46:48.105773 | orchestrator | 2026-01-10 14:46:48 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED 2026-01-10 14:46:48.106579 | orchestrator | 2026-01-10 14:46:48 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED 2026-01-10 14:46:48.107584 | orchestrator | 2026-01-10 14:46:48 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED 2026-01-10 14:46:48.108973 | orchestrator | 2026-01-10 14:46:48 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:46:48.110541 | orchestrator | 2026-01-10 14:46:48 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:46:48.110572 | orchestrator | 2026-01-10 14:46:48 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:51.165260 | orchestrator | 2026-01-10 14:46:51 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED 2026-01-10 14:46:51.166164 | orchestrator | 2026-01-10 14:46:51 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state STARTED 2026-01-10 14:46:51.167722 | orchestrator | 2026-01-10 14:46:51 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED 2026-01-10 14:46:51.167869 | orchestrator | 2026-01-10 14:46:51 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:46:51.169563 | orchestrator | 2026-01-10 14:46:51 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:46:51.169594 | orchestrator | 2026-01-10 14:46:51 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:46:54.204810 | orchestrator | 2026-01-10 14:46:54 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state STARTED 2026-01-10 14:46:54.205932 | orchestrator | 2026-01-10 14:46:54 | INFO  | Task adbd6655-6165-4362-8d33-f83c47ce1fde is in state SUCCESS 2026-01-10 14:46:54.206430 | orchestrator | 2026-01-10 14:46:54.206458 | orchestrator | 2026-01-10 
14:46:54.206469 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:46:54.206481 | orchestrator |
2026-01-10 14:46:54.206491 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:46:54.206502 | orchestrator | Saturday 10 January 2026 14:45:31 +0000 (0:00:00.276) 0:00:00.276 ******
2026-01-10 14:46:54.206512 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:46:54.206521 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:46:54.206530 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:46:54.206540 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:46:54.206549 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:46:54.206558 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:46:54.206568 | orchestrator | ok: [testbed-manager]
2026-01-10 14:46:54.206577 | orchestrator |
2026-01-10 14:46:54.206587 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:46:54.206597 | orchestrator | Saturday 10 January 2026 14:45:32 +0000 (0:00:01.137) 0:00:01.414 ******
2026-01-10 14:46:54.206608 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-01-10 14:46:54.206619 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-01-10 14:46:54.206628 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-01-10 14:46:54.206639 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-01-10 14:46:54.206649 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-01-10 14:46:54.206659 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-01-10 14:46:54.206670 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-01-10 14:46:54.206680 | orchestrator |
2026-01-10 14:46:54.206690 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-01-10 14:46:54.206697 | orchestrator |
2026-01-10 14:46:54.206703 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-01-10 14:46:54.206709 | orchestrator | Saturday 10 January 2026 14:45:33 +0000 (0:00:01.000) 0:00:02.414 ******
2026-01-10 14:46:54.206716 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-01-10 14:46:54.206723 | orchestrator |
2026-01-10 14:46:54.206729 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting services] *************
2026-01-10 14:46:54.206736 | orchestrator | Saturday 10 January 2026 14:45:35 +0000 (0:00:02.194) 0:00:04.608 ******
2026-01-10 14:46:54.206742 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2026-01-10 14:46:54.206748 | orchestrator |
2026-01-10 14:46:54.206754 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting endpoints] ************
2026-01-10 14:46:54.206760 | orchestrator | Saturday 10 January 2026 14:45:40 +0000 (0:00:04.629) 0:00:09.238 ******
2026-01-10 14:46:54.206767 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-01-10 14:46:54.206774 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-01-10 14:46:54.206780 | orchestrator |
2026-01-10 14:46:54.206786 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-01-10 14:46:54.206792 | orchestrator | Saturday 10 January 2026 14:45:46 +0000 (0:00:06.548) 0:00:15.787 ******
2026-01-10 14:46:54.206801 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-10 14:46:54.206809 | orchestrator |
2026-01-10 14:46:54.206815 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-01-10 14:46:54.206823 | orchestrator | Saturday 10 January 2026 14:45:51 +0000 (0:00:04.216) 0:00:20.003 ******
2026-01-10 14:46:54.206835 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-10 14:46:54.206850 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2026-01-10 14:46:54.206873 | orchestrator |
2026-01-10 14:46:54.206884 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-01-10 14:46:54.206894 | orchestrator | Saturday 10 January 2026 14:45:55 +0000 (0:00:04.316) 0:00:24.320 ******
2026-01-10 14:46:54.206903 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-10 14:46:54.206909 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2026-01-10 14:46:54.206915 | orchestrator |
2026-01-10 14:46:54.206921 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting/revoking user roles] ***********
2026-01-10 14:46:54.206927 | orchestrator | Saturday 10 January 2026 14:46:01 +0000 (0:00:06.040) 0:00:30.360 ******
2026-01-10 14:46:54.206934 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2026-01-10 14:46:54.206940 | orchestrator |
2026-01-10 14:46:54.206946 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:46:54.206952 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:46:54.206958 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:46:54.206965 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:46:54.206971 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:46:54.206977 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:46:54.206992 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:46:54.206998 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:46:54.207004 | orchestrator |
2026-01-10 14:46:54.207010 | orchestrator |
2026-01-10 14:46:54.207017 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:46:54.207023 | orchestrator | Saturday 10 January 2026 14:46:06 +0000 (0:00:05.471) 0:00:35.832 ******
2026-01-10 14:46:54.207029 | orchestrator | ===============================================================================
2026-01-10 14:46:54.207039 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting endpoints ------------ 6.55s
2026-01-10 14:46:54.207053 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.04s
2026-01-10 14:46:54.207065 | orchestrator | service-ks-register : ceph-rgw | Granting/revoking user roles ----------- 5.47s
2026-01-10 14:46:54.207075 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting services ------------- 4.63s
2026-01-10 14:46:54.207085 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.32s
2026-01-10 14:46:54.207095 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 4.22s
2026-01-10 14:46:54.207106 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.19s
2026-01-10 14:46:54.207116 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.14s
2026-01-10 14:46:54.207127 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.00s
2026-01-10 14:46:54.207137 | orchestrator |
2026-01-10 14:46:54.207148 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-10 14:46:54.207158 | orchestrator | 2.16.14
2026-01-10 14:46:54.207169 | orchestrator |
2026-01-10 14:46:54.207225 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-01-10 14:46:54.207237 | orchestrator |
2026-01-10 14:46:54.207247 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-01-10 14:46:54.207258 | orchestrator | Saturday 10 January 2026 14:45:22 +0000 (0:00:00.266) 0:00:00.266 ******
2026-01-10 14:46:54.207279 | orchestrator | changed: [testbed-manager]
2026-01-10 14:46:54.207290 | orchestrator |
2026-01-10 14:46:54.207300 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-01-10 14:46:54.207311 | orchestrator | Saturday 10 January 2026 14:45:24 +0000 (0:00:01.711) 0:00:01.977 ******
2026-01-10 14:46:54.207321 | orchestrator | changed: [testbed-manager]
2026-01-10 14:46:54.207332 | orchestrator |
2026-01-10 14:46:54.207442 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-01-10 14:46:54.207450 | orchestrator | Saturday 10 January 2026 14:45:25 +0000 (0:00:01.063) 0:00:03.041 ******
2026-01-10 14:46:54.207456 | orchestrator | changed: [testbed-manager]
2026-01-10 14:46:54.207462 | orchestrator |
2026-01-10 14:46:54.207469 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-01-10 14:46:54.207475 | orchestrator | Saturday 10 January 2026 14:45:26 +0000 (0:00:01.146) 0:00:04.188 ******
2026-01-10 14:46:54.207481 | orchestrator | changed: [testbed-manager]
2026-01-10 14:46:54.207487 | orchestrator |
2026-01-10 14:46:54.207493 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-01-10 14:46:54.207499 | orchestrator | Saturday 10 January 2026 14:45:28 +0000 (0:00:01.473) 0:00:05.661 ******
2026-01-10 14:46:54.207505 | orchestrator | changed: [testbed-manager]
2026-01-10 14:46:54.207511 | orchestrator |
2026-01-10 14:46:54.207517 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-01-10 14:46:54.207525 | orchestrator | Saturday 10 January 2026 14:45:30 +0000 (0:00:01.769) 0:00:07.431 ******
2026-01-10 14:46:54.207535 | orchestrator | changed: [testbed-manager]
2026-01-10 14:46:54.207545 | orchestrator |
2026-01-10 14:46:54.207556 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-01-10 14:46:54.207566 | orchestrator | Saturday 10 January 2026 14:45:31 +0000 (0:00:01.166) 0:00:08.597 ******
2026-01-10 14:46:54.207577 | orchestrator | changed: [testbed-manager]
2026-01-10 14:46:54.207588 | orchestrator |
2026-01-10 14:46:54.207599 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-01-10 14:46:54.207610 | orchestrator | Saturday 10 January 2026 14:45:33 +0000 (0:00:01.984) 0:00:10.582 ******
2026-01-10 14:46:54.207620 | orchestrator | changed: [testbed-manager]
2026-01-10 14:46:54.207641 | orchestrator |
2026-01-10 14:46:54.207653 | orchestrator | TASK [Create admin user] *******************************************************
2026-01-10 14:46:54.207659 | orchestrator | Saturday 10 January 2026 14:45:34 +0000 (0:00:00.947) 0:00:11.529 ******
2026-01-10 14:46:54.207665 | orchestrator | changed: [testbed-manager]
2026-01-10 14:46:54.207672 | orchestrator |
2026-01-10 14:46:54.207680 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-01-10 14:46:54.207695 | orchestrator | Saturday 10 January 2026 14:46:28 +0000 (0:00:54.251) 0:01:05.781 ******
2026-01-10 14:46:54.207710 | orchestrator | skipping: [testbed-manager]
2026-01-10 14:46:54.207720 | orchestrator |
2026-01-10 14:46:54.207732 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-10 14:46:54.207744 | orchestrator |
2026-01-10 14:46:54.207756 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-10 14:46:54.207765 | orchestrator | Saturday 10 January 2026 14:46:28 +0000 (0:00:00.174) 0:01:05.955 ******
2026-01-10 14:46:54.207772 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:46:54.207778 | orchestrator |
2026-01-10 14:46:54.207784 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-10 14:46:54.207790 | orchestrator |
2026-01-10 14:46:54.207796 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-10 14:46:54.207802 | orchestrator | Saturday 10 January 2026 14:46:40 +0000 (0:00:11.635) 0:01:17.591 ******
2026-01-10 14:46:54.207808 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:46:54.207814 | orchestrator |
2026-01-10 14:46:54.207820 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-10 14:46:54.207827 | orchestrator |
2026-01-10 14:46:54.207833 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-10 14:46:54.207854 | orchestrator | Saturday 10 January 2026 14:46:41 +0000 (0:00:01.119) 0:01:18.710 ******
2026-01-10 14:46:54.207861 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:46:54.207867 | orchestrator |
2026-01-10 14:46:54.207873 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:46:54.207879 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-10 14:46:54.207886 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:46:54.207892 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:46:54.207898 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-10 14:46:54.207904 | orchestrator |
2026-01-10 14:46:54.207910 | orchestrator |
2026-01-10 14:46:54.207916 | orchestrator |
2026-01-10 14:46:54.207924 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:46:54.207934 | orchestrator | Saturday 10 January 2026 14:46:52 +0000 (0:00:11.115) 0:01:29.826 ******
2026-01-10 14:46:54.207944 | orchestrator | ===============================================================================
2026-01-10 14:46:54.207954 | orchestrator | Create admin user ------------------------------------------------------ 54.25s
2026-01-10 14:46:54.207965 | orchestrator | Restart ceph manager service ------------------------------------------- 23.87s
2026-01-10 14:46:54.207975 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.98s
2026-01-10 14:46:54.207981 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.77s
2026-01-10 14:46:54.207988 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.71s
2026-01-10 14:46:54.207993 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.47s
2026-01-10 14:46:54.207999 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.17s
2026-01-10 14:46:54.208007 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.15s
2026-01-10 14:46:54.208017 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.06s
2026-01-10 14:46:54.208033 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.95s
2026-01-10 14:46:54.208044 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.17s
2026-01-10 14:46:54.208054 |
orchestrator | 2026-01-10 14:46:54 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED
2026-01-10 14:46:54.208754 | orchestrator | 2026-01-10 14:46:54 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED
2026-01-10 14:46:54.211146 | orchestrator | 2026-01-10 14:46:54 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED
2026-01-10 14:46:54.211198 | orchestrator | 2026-01-10 14:46:54 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:48:31.689408 | orchestrator | 2026-01-10 14:48:31 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED
2026-01-10 14:48:31.699384 | orchestrator |
2026-01-10 14:48:31.699474 | orchestrator |
2026-01-10 14:48:31.699483 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:48:31.699491 | orchestrator |
2026-01-10 14:48:31.699496 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:48:31.699503 | orchestrator | Saturday 10 January 2026 14:45:22 +0000 (0:00:00.281) 0:00:00.281 ******
2026-01-10 14:48:31.699509 | orchestrator | ok: [testbed-manager]
2026-01-10 14:48:31.699517 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:48:31.699523 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:48:31.699529 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:48:31.699535 | orchestrator | ok: [testbed-node-3]
2026-01-10 14:48:31.699541 | orchestrator | ok: [testbed-node-4]
2026-01-10 14:48:31.699546 | orchestrator | ok: [testbed-node-5]
2026-01-10 14:48:31.699552 | orchestrator |
2026-01-10 14:48:31.699558 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:48:31.699564 | orchestrator | Saturday 10 January 2026 14:45:23 +0000 (0:00:01.065) 0:00:01.346 ******
2026-01-10 14:48:31.699571 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-01-10 14:48:31.699578 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-01-10 14:48:31.699584 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-01-10 14:48:31.699589 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-01-10 14:48:31.699595 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-01-10 14:48:31.699603 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-01-10 14:48:31.699609 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-01-10 14:48:31.699616 | orchestrator |
2026-01-10 14:48:31.699623 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-01-10 14:48:31.699629 | orchestrator |
2026-01-10 14:48:31.699634 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-01-10 14:48:31.699640 | orchestrator | Saturday 10 January 2026 14:45:24 +0000 (0:00:00.759) 0:00:02.106 ******
2026-01-10 14:48:31.699647 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-10 14:48:31.699655 | orchestrator |
2026-01-10 14:48:31.699661 | orchestrator | TASK [prometheus : Ensuring config directories exist]
**************************
2026-01-10 14:48:31.699667 | orchestrator | Saturday 10 January 2026 14:45:26 +0000 (0:00:01.705) 0:00:03.811 ******
2026-01-10 14:48:31.699680 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-01-10 14:48:31.699721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-10 14:48:31.699729 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-10 14:48:31.699754 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-10 14:48:31.699761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-10 14:48:31.699769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:48:31.699776 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-10 14:48:31.699790 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-10 14:48:31.699795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:48:31.699800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-10 14:48:31.699809 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:48:31.699814 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-10 14:48:31.699818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:48:31.699821 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-10 14:48:31.699829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:48:31.699833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-10 14:48:31.699838 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-10 14:48:31.699842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:48:31.699849 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-10 14:48:31.699853 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:48:31.699857 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-10 14:48:31.699865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:48:31.700008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-10 14:48:31.700380 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.700406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.700413 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.700431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.700437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.700444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.700462 | orchestrator | 2026-01-10 14:48:31.700469 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-10 14:48:31.700476 | orchestrator | Saturday 10 January 2026 14:45:30 +0000 (0:00:04.119) 0:00:07.931 ****** 2026-01-10 14:48:31.700484 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:48:31.700492 | orchestrator | 2026-01-10 14:48:31.700499 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] 
***** 2026-01-10 14:48:31.700505 | orchestrator | Saturday 10 January 2026 14:45:32 +0000 (0:00:02.257) 0:00:10.189 ****** 2026-01-10 14:48:31.700513 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-10 14:48:31.700520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.700526 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.700541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.700548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.700562 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 
14:48:31.700568 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.700575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.700581 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.700588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.700596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.700607 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.700613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.700624 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.700631 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:48:31.700638 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.700645 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.700655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.700663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.700673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.700679 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.700686 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.700693 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.700699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.700706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.700717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31 | INFO  | Task c5a6b699-ef36-440a-9f5e-c325dd5ac1c4 is in state SUCCESS 2026-01-10 14:48:31.701241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.701276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.701284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.701291 | orchestrator | 2026-01-10 14:48:31.701298 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-01-10 14:48:31.701306 | orchestrator | Saturday 10 January 2026 14:45:39 +0000 (0:00:06.958) 0:00:17.147 ****** 2026-01-10 14:48:31.701314 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 
'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-10 14:48:31.701322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.701329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 
14:48:31.701381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.701391 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.701399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.701406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.701414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.701422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.701429 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.701436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.701470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.701477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.701483 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:31.701491 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:48:31.701498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.701504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.701510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.701548 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.701556 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:48:31.701562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.701568 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:48:31.701574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.701579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.701585 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:31.701591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.701597 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.701603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.701613 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:31.701620 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.701683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.701693 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:48:31.701720 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.701727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.701734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.701741 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:48:31.701747 | orchestrator | 2026-01-10 14:48:31.701753 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-10 14:48:31.701760 | orchestrator | Saturday 10 January 2026 14:45:42 +0000 (0:00:02.459) 0:00:19.606 ****** 2026-01-10 14:48:31.701767 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-10 14:48:31.701781 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.701812 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.701821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.701828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.701836 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:48:31.701843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.701857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.701864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.701889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.701897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.701903 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.701909 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:48:31.701915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.701922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.701928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.701941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.701948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.701971 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.701979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.701985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.701992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.701998 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:48:31.702004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.702117 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:48:31.702131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.702138 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:31.702146 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.702153 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:31.702191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.702200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.702206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.702212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.702218 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:31.702223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.702236 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:48:31.702241 | orchestrator | 2026-01-10 14:48:31.702247 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-01-10 14:48:31.702253 | orchestrator | Saturday 10 January 2026 14:45:44 +0000 (0:00:02.759) 0:00:22.366 ****** 2026-01-10 14:48:31.702261 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-10 14:48:31.702268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.702296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.702304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.702311 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.702317 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.702329 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.702337 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.702344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.702351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.702381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.702389 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.702397 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.702404 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.702417 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.702425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.702433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.702440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.702454 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.702464 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:48:31.702477 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.702484 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.702490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.702496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.702502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.702514 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.702522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.702533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.702540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.702548 | orchestrator | 2026-01-10 14:48:31.702555 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-01-10 14:48:31.702562 | orchestrator | Saturday 10 January 2026 14:45:51 +0000 (0:00:06.186) 0:00:28.552 ****** 2026-01-10 14:48:31.702569 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:48:31.702578 | orchestrator | 2026-01-10 14:48:31.702586 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-01-10 14:48:31.702595 | orchestrator | Saturday 10 January 2026 14:45:52 +0000 (0:00:01.341) 0:00:29.894 ****** 2026-01-10 14:48:31.702602 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:48:31.702609 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:31.702617 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:31.702624 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:31.702630 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:48:31.702637 | orchestrator | skipping: 
[testbed-node-4] 2026-01-10 14:48:31.702644 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:48:31.702651 | orchestrator | 2026-01-10 14:48:31.702658 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-01-10 14:48:31.702665 | orchestrator | Saturday 10 January 2026 14:45:53 +0000 (0:00:00.683) 0:00:30.577 ****** 2026-01-10 14:48:31.702673 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:48:31.702680 | orchestrator | 2026-01-10 14:48:31.702687 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-01-10 14:48:31.702694 | orchestrator | Saturday 10 January 2026 14:45:53 +0000 (0:00:00.761) 0:00:31.339 ****** 2026-01-10 14:48:31.702701 | orchestrator | [WARNING]: Skipped 2026-01-10 14:48:31.702708 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:48:31.702715 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-01-10 14:48:31.702720 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:48:31.702726 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-01-10 14:48:31.702732 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:48:31.702738 | orchestrator | [WARNING]: Skipped 2026-01-10 14:48:31.702745 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:48:31.702751 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-01-10 14:48:31.702756 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:48:31.702762 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-01-10 14:48:31.702769 | orchestrator | [WARNING]: Skipped 2026-01-10 14:48:31.702774 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 
14:48:31.702780 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-01-10 14:48:31.702787 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:48:31.702803 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-01-10 14:48:31.702811 | orchestrator | [WARNING]: Skipped 2026-01-10 14:48:31.702823 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:48:31.702830 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-01-10 14:48:31.702836 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:48:31.702842 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-01-10 14:48:31.702848 | orchestrator | [WARNING]: Skipped 2026-01-10 14:48:31.702853 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:48:31.702859 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-01-10 14:48:31.702865 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:48:31.702870 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-01-10 14:48:31.702877 | orchestrator | [WARNING]: Skipped 2026-01-10 14:48:31.702882 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:48:31.702888 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-01-10 14:48:31.702894 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:48:31.702900 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-01-10 14:48:31.702905 | orchestrator | [WARNING]: Skipped 2026-01-10 14:48:31.702911 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:48:31.702917 | orchestrator | 
node-5/prometheus.yml.d' path due to this access issue: 2026-01-10 14:48:31.702922 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-10 14:48:31.702928 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-01-10 14:48:31.702934 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:48:31.702940 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-10 14:48:31.702946 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-10 14:48:31.702952 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-10 14:48:31.702958 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-10 14:48:31.702964 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-10 14:48:31.702970 | orchestrator | 2026-01-10 14:48:31.702976 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-01-10 14:48:31.702982 | orchestrator | Saturday 10 January 2026 14:45:55 +0000 (0:00:01.733) 0:00:33.073 ****** 2026-01-10 14:48:31.702987 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-10 14:48:31.702994 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:31.703000 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-10 14:48:31.703005 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:31.703011 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-10 14:48:31.703017 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:48:31.703022 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-10 14:48:31.703029 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:31.703035 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-10 
14:48:31.703040 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:48:31.703047 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-10 14:48:31.703053 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:48:31.703121 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-01-10 14:48:31.703127 | orchestrator | 2026-01-10 14:48:31.703133 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-01-10 14:48:31.703146 | orchestrator | Saturday 10 January 2026 14:46:12 +0000 (0:00:16.463) 0:00:49.536 ****** 2026-01-10 14:48:31.703152 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-10 14:48:31.703158 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-10 14:48:31.703165 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:31.703171 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:31.703177 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-10 14:48:31.703183 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:31.703188 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-10 14:48:31.703195 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:48:31.703201 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-10 14:48:31.703208 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:48:31.703213 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-10 14:48:31.703220 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:48:31.703226 | orchestrator | 
changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-01-10 14:48:31.703231 | orchestrator | 2026-01-10 14:48:31.703237 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-01-10 14:48:31.703244 | orchestrator | Saturday 10 January 2026 14:46:16 +0000 (0:00:04.599) 0:00:54.136 ****** 2026-01-10 14:48:31.703251 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-10 14:48:31.703266 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:31.703272 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-10 14:48:31.703278 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:31.703284 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-10 14:48:31.703291 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:31.703297 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-01-10 14:48:31.703304 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-10 14:48:31.703310 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:48:31.703316 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-10 14:48:31.703322 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:48:31.703328 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-10 14:48:31.703334 | orchestrator | 
skipping: [testbed-node-4] 2026-01-10 14:48:31.703340 | orchestrator | 2026-01-10 14:48:31.703346 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-01-10 14:48:31.703352 | orchestrator | Saturday 10 January 2026 14:46:18 +0000 (0:00:02.229) 0:00:56.366 ****** 2026-01-10 14:48:31.703358 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:48:31.703364 | orchestrator | 2026-01-10 14:48:31.703370 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-01-10 14:48:31.703376 | orchestrator | Saturday 10 January 2026 14:46:19 +0000 (0:00:00.818) 0:00:57.184 ****** 2026-01-10 14:48:31.703383 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:48:31.703388 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:31.703395 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:31.703407 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:31.703413 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:48:31.703419 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:48:31.703425 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:48:31.703432 | orchestrator | 2026-01-10 14:48:31.703437 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-01-10 14:48:31.703444 | orchestrator | Saturday 10 January 2026 14:46:20 +0000 (0:00:00.997) 0:00:58.182 ****** 2026-01-10 14:48:31.703450 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:48:31.703456 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:48:31.703463 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:48:31.703468 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:48:31.703474 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:31.703480 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:48:31.703486 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:48:31.703492 | 
orchestrator | 2026-01-10 14:48:31.703498 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-01-10 14:48:31.703504 | orchestrator | Saturday 10 January 2026 14:46:23 +0000 (0:00:03.226) 0:01:01.408 ****** 2026-01-10 14:48:31.703510 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-10 14:48:31.703516 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-10 14:48:31.703522 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:48:31.703527 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:31.703533 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-10 14:48:31.703539 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:31.703544 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-10 14:48:31.703550 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:31.703556 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-10 14:48:31.703561 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:48:31.703567 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-10 14:48:31.703573 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:48:31.703578 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-10 14:48:31.703583 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:48:31.703589 | orchestrator | 2026-01-10 14:48:31.703594 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-01-10 14:48:31.703600 | orchestrator | Saturday 10 January 2026 14:46:26 +0000 (0:00:02.835) 0:01:04.243 ****** 2026-01-10 14:48:31.703607 | 
orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-10 14:48:31.703614 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:31.703621 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-10 14:48:31.703627 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:31.703633 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-10 14:48:31.703639 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:31.703644 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-10 14:48:31.703649 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:48:31.703663 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-10 14:48:31.703670 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:48:31.703676 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-01-10 14:48:31.703687 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-10 14:48:31.703693 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:48:31.703700 | orchestrator | 2026-01-10 14:48:31.703705 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-01-10 14:48:31.703711 | orchestrator | Saturday 10 January 2026 14:46:29 +0000 (0:00:02.742) 0:01:06.986 ****** 2026-01-10 14:48:31.703716 | orchestrator | [WARNING]: Skipped 2026-01-10 14:48:31.703722 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-01-10 14:48:31.703727 | orchestrator | due to 
this access issue: 2026-01-10 14:48:31.703732 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-01-10 14:48:31.703738 | orchestrator | not a directory 2026-01-10 14:48:31.703743 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-10 14:48:31.703748 | orchestrator | 2026-01-10 14:48:31.703754 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-01-10 14:48:31.703760 | orchestrator | Saturday 10 January 2026 14:46:30 +0000 (0:00:01.374) 0:01:08.361 ****** 2026-01-10 14:48:31.703766 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:48:31.703772 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:31.703777 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:31.703783 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:31.703788 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:48:31.703794 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:48:31.703799 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:48:31.703805 | orchestrator | 2026-01-10 14:48:31.703811 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-01-10 14:48:31.703817 | orchestrator | Saturday 10 January 2026 14:46:32 +0000 (0:00:01.196) 0:01:09.557 ****** 2026-01-10 14:48:31.703822 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:48:31.703827 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:31.703833 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:31.703838 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:31.703844 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:48:31.703850 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:48:31.703855 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:48:31.703860 | orchestrator | 2026-01-10 14:48:31.703867 | orchestrator | TASK [service-check-containers : prometheus | Check containers] 
**************** 2026-01-10 14:48:31.703872 | orchestrator | Saturday 10 January 2026 14:46:33 +0000 (0:00:00.939) 0:01:10.496 ****** 2026-01-10 14:48:31.703880 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-10 14:48:31.703888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.703905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.703918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.703925 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.703931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.703937 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.703943 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.703949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.703955 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-10 14:48:31.703966 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.703979 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.703986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.703992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.703999 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.704005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.704011 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.704026 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:48:31.704032 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.704038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.704044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.704050 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.704085 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.704091 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.704101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.704110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-10 14:48:31.704117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.704123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.704128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-10 14:48:31.704134 | orchestrator | 2026-01-10 14:48:31.704140 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] *** 2026-01-10 14:48:31.704146 | orchestrator | Saturday 10 January 2026 14:46:38 +0000 (0:00:05.233) 0:01:15.729 ****** 2026-01-10 14:48:31.704153 | orchestrator | changed: [testbed-manager] => { 2026-01-10 14:48:31.704159 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:48:31.704165 | orchestrator | } 2026-01-10 14:48:31.704171 | orchestrator | changed: [testbed-node-0] => { 2026-01-10 14:48:31.704177 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:48:31.704184 | orchestrator | } 2026-01-10 
14:48:31.704189 | orchestrator | changed: [testbed-node-1] => { 2026-01-10 14:48:31.704195 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:48:31.704200 | orchestrator | } 2026-01-10 14:48:31.704205 | orchestrator | changed: [testbed-node-2] => { 2026-01-10 14:48:31.704211 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:48:31.704220 | orchestrator | } 2026-01-10 14:48:31.704227 | orchestrator | changed: [testbed-node-3] => { 2026-01-10 14:48:31.704232 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:48:31.704238 | orchestrator | } 2026-01-10 14:48:31.704244 | orchestrator | changed: [testbed-node-4] => { 2026-01-10 14:48:31.704250 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:48:31.704255 | orchestrator | } 2026-01-10 14:48:31.704260 | orchestrator | changed: [testbed-node-5] => { 2026-01-10 14:48:31.704266 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:48:31.704272 | orchestrator | } 2026-01-10 14:48:31.704277 | orchestrator | 2026-01-10 14:48:31.704282 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-10 14:48:31.704288 | orchestrator | Saturday 10 January 2026 14:46:39 +0000 (0:00:00.906) 0:01:16.636 ****** 2026-01-10 14:48:31.704294 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-10 14:48:31.704307 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.704313 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.704319 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:48:31.704330 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.704336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.704342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.704349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.704355 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:48:31.704367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.704374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.704380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.704387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.704398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.704404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.704411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.704418 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:31.704424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.704434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.704440 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.704446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.704457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-10 14:48:31.704464 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:31.704470 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:31.704475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.704481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.704487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.704493 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:48:31.704499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.704510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.704516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.704527 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:48:31.704534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-10 14:48:31.704540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.704546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-10 14:48:31.704552 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:48:31.704558 | orchestrator | 2026-01-10 14:48:31.704563 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-01-10 14:48:31.704569 | orchestrator | Saturday 10 January 2026 14:46:41 +0000 (0:00:02.583) 0:01:19.219 ****** 2026-01-10 14:48:31.704576 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-10 14:48:31.704583 | orchestrator | skipping: [testbed-manager] 2026-01-10 14:48:31.704590 | orchestrator | 2026-01-10 14:48:31.704597 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-10 14:48:31.704602 | orchestrator | Saturday 10 January 2026 14:46:43 +0000 (0:00:01.517) 0:01:20.737 ****** 2026-01-10 14:48:31.704608 | orchestrator | 2026-01-10 14:48:31.704614 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-10 14:48:31.704620 | orchestrator | Saturday 10 January 2026 14:46:43 +0000 (0:00:00.137) 0:01:20.875 ****** 
2026-01-10 14:48:31.704626 | orchestrator |
2026-01-10 14:48:31.704632 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-10 14:48:31.704638 | orchestrator | Saturday 10 January 2026 14:46:43 +0000 (0:00:00.102) 0:01:20.977 ******
2026-01-10 14:48:31.704644 | orchestrator |
2026-01-10 14:48:31.704650 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-10 14:48:31.704655 | orchestrator | Saturday 10 January 2026 14:46:43 +0000 (0:00:00.115) 0:01:21.093 ******
2026-01-10 14:48:31.704661 | orchestrator |
2026-01-10 14:48:31.704669 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-10 14:48:31.704675 | orchestrator | Saturday 10 January 2026 14:46:43 +0000 (0:00:00.128) 0:01:21.221 ******
2026-01-10 14:48:31.704682 | orchestrator |
2026-01-10 14:48:31.704689 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-10 14:48:31.704695 | orchestrator | Saturday 10 January 2026 14:46:43 +0000 (0:00:00.129) 0:01:21.350 ******
2026-01-10 14:48:31.704700 | orchestrator |
2026-01-10 14:48:31.704706 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-10 14:48:31.704713 | orchestrator | Saturday 10 January 2026 14:46:44 +0000 (0:00:00.490) 0:01:21.841 ******
2026-01-10 14:48:31.704725 | orchestrator |
2026-01-10 14:48:31.704737 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-01-10 14:48:31.704742 | orchestrator | Saturday 10 January 2026 14:46:44 +0000 (0:00:00.100) 0:01:21.942 ******
2026-01-10 14:48:31.704749 | orchestrator | changed: [testbed-manager]
2026-01-10 14:48:31.704756 | orchestrator |
2026-01-10 14:48:31.704762 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-01-10 14:48:31.704768 | orchestrator | Saturday 10 January 2026 14:47:02 +0000 (0:00:18.095) 0:01:40.038 ******
2026-01-10 14:48:31.704774 | orchestrator | changed: [testbed-manager]
2026-01-10 14:48:31.704780 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:48:31.704787 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:48:31.704792 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:48:31.704798 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:48:31.704804 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:48:31.704810 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:48:31.704816 | orchestrator |
2026-01-10 14:48:31.704821 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-01-10 14:48:31.704828 | orchestrator | Saturday 10 January 2026 14:47:16 +0000 (0:00:14.112) 0:01:54.150 ******
2026-01-10 14:48:31.704833 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:48:31.704839 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:48:31.704845 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:48:31.704851 | orchestrator |
2026-01-10 14:48:31.704856 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-01-10 14:48:31.704863 | orchestrator | Saturday 10 January 2026 14:47:27 +0000 (0:00:11.234) 0:02:05.385 ******
2026-01-10 14:48:31.704869 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:48:31.704875 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:48:31.704881 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:48:31.704886 | orchestrator |
2026-01-10 14:48:31.704893 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-01-10 14:48:31.704898 | orchestrator | Saturday 10 January 2026 14:47:32 +0000 (0:00:04.973) 0:02:10.358 ******
2026-01-10 14:48:31.704904 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:48:31.704911 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:48:31.704917 | orchestrator | changed: [testbed-manager]
2026-01-10 14:48:31.704922 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:48:31.704928 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:48:31.704935 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:48:31.704940 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:48:31.704946 | orchestrator |
2026-01-10 14:48:31.704952 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-01-10 14:48:31.704958 | orchestrator | Saturday 10 January 2026 14:47:48 +0000 (0:00:15.551) 0:02:25.910 ******
2026-01-10 14:48:31.704964 | orchestrator | changed: [testbed-manager]
2026-01-10 14:48:31.704970 | orchestrator |
2026-01-10 14:48:31.704976 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-01-10 14:48:31.704983 | orchestrator | Saturday 10 January 2026 14:48:01 +0000 (0:00:13.356) 0:02:39.267 ******
2026-01-10 14:48:31.704989 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:48:31.704995 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:48:31.705001 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:48:31.705007 | orchestrator |
2026-01-10 14:48:31.705013 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-01-10 14:48:31.705019 | orchestrator | Saturday 10 January 2026 14:48:12 +0000 (0:00:10.493) 0:02:49.760 ******
2026-01-10 14:48:31.705024 | orchestrator | changed: [testbed-manager]
2026-01-10 14:48:31.705031 | orchestrator |
2026-01-10 14:48:31.705036 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-01-10 14:48:31.705042 | orchestrator | Saturday 10 January 2026 14:48:17 +0000 (0:00:05.392) 0:02:55.153 ******
2026-01-10 14:48:31.705048 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:48:31.705112 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:48:31.705120 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:48:31.705126 | orchestrator |
2026-01-10 14:48:31.705133 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:48:31.705140 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-10 14:48:31.705147 | orchestrator | testbed-node-0 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-10 14:48:31.705154 | orchestrator | testbed-node-1 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-10 14:48:31.705160 | orchestrator | testbed-node-2 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-10 14:48:31.705167 | orchestrator | testbed-node-3 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-01-10 14:48:31.705174 | orchestrator | testbed-node-4 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-01-10 14:48:31.705184 | orchestrator | testbed-node-5 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-01-10 14:48:31.705190 | orchestrator |
2026-01-10 14:48:31.705196 | orchestrator |
2026-01-10 14:48:31.705202 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:48:31.705209 | orchestrator | Saturday 10 January 2026 14:48:29 +0000 (0:00:11.876) 0:03:07.029 ******
2026-01-10 14:48:31.705215 | orchestrator | ===============================================================================
2026-01-10 14:48:31.705222 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.10s
2026-01-10 14:48:31.705236 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.46s
2026-01-10 14:48:31.705242 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.55s
2026-01-10 14:48:31.705248 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.11s
2026-01-10 14:48:31.705255 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 13.36s
2026-01-10 14:48:31.705261 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.88s
2026-01-10 14:48:31.705267 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 11.23s
2026-01-10 14:48:31.705274 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.49s
2026-01-10 14:48:31.705281 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.96s
2026-01-10 14:48:31.705288 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.19s
2026-01-10 14:48:31.705295 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.39s
2026-01-10 14:48:31.705302 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 5.23s
2026-01-10 14:48:31.705309 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 4.97s
2026-01-10 14:48:31.705315 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.60s
2026-01-10 14:48:31.705322 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.12s
2026-01-10 14:48:31.705328 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.23s
2026-01-10 14:48:31.705335 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.84s
2026-01-10 14:48:31.705342 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.76s
2026-01-10 14:48:31.705349 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.74s
2026-01-10 14:48:31.705356 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.58s
2026-01-10 14:48:31.705370 | orchestrator | 2026-01-10 14:48:31 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED
2026-01-10 14:48:31.705377 | orchestrator | 2026-01-10 14:48:31 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED
2026-01-10 14:48:31.705383 | orchestrator | 2026-01-10 14:48:31 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED
2026-01-10 14:48:31.705389 | orchestrator | 2026-01-10 14:48:31 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:48:34.745258 | orchestrator | 2026-01-10 14:48:34 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED
2026-01-10 14:48:34.746239 | orchestrator | 2026-01-10 14:48:34 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state STARTED
2026-01-10 14:48:34.747918 | orchestrator | 2026-01-10 14:48:34 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED
2026-01-10 14:48:34.749317 | orchestrator | 2026-01-10 14:48:34 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED
2026-01-10 14:48:34.749360 | orchestrator | 2026-01-10 14:48:34 | INFO  | Wait 1 second(s) until the next check
2026-01-10 14:48:37.804942 | orchestrator | 2026-01-10 14:48:37 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED
2026-01-10 14:48:37.808233 | orchestrator | 2026-01-10 14:48:37 | INFO  | Task 3e2d7132-1b2e-427a-96db-eb49ff35fee2 is in state SUCCESS
2026-01-10 14:48:37.810223 | orchestrator |
2026-01-10 14:48:37.810313 | orchestrator |
2026-01-10 14:48:37.810324 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-10 14:48:37.810333 | orchestrator |
2026-01-10 14:48:37.810341 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-10 14:48:37.810349 | orchestrator | Saturday 10 January 2026 14:45:31 +0000 (0:00:00.537) 0:00:00.537 ******
2026-01-10 14:48:37.810356 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:48:37.810363 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:48:37.810371 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:48:37.810378 | orchestrator |
2026-01-10 14:48:37.810385 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-10 14:48:37.810392 | orchestrator | Saturday 10 January 2026 14:45:32 +0000 (0:00:00.356) 0:00:00.894 ******
2026-01-10 14:48:37.810400 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-01-10 14:48:37.810408 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-01-10 14:48:37.810416 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-01-10 14:48:37.810423 | orchestrator |
2026-01-10 14:48:37.810431 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-01-10 14:48:37.810439 | orchestrator |
2026-01-10 14:48:37.810446 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-01-10 14:48:37.810454 | orchestrator | Saturday 10 January 2026 14:45:32 +0000 (0:00:00.529) 0:00:01.424 ******
2026-01-10 14:48:37.810461 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:48:37.810469 | orchestrator |
2026-01-10 14:48:37.810477 | orchestrator | TASK [service-ks-register : glance | Creating/deleting services] ***************
2026-01-10 14:48:37.810484 | orchestrator | Saturday 10 January 2026 14:45:33 +0000 (0:00:01.162) 0:00:02.587 ******
2026-01-10 14:48:37.810492 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-01-10 14:48:37.810500 | orchestrator |
2026-01-10 14:48:37.810507 | orchestrator | TASK [service-ks-register : glance | Creating/deleting endpoints] **************
2026-01-10 14:48:37.810516 | orchestrator | Saturday 10 January 2026 14:45:38 +0000 (0:00:05.246) 0:00:07.834 ******
2026-01-10 14:48:37.810524 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-01-10 14:48:37.810532 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-01-10 14:48:37.810557 | orchestrator |
2026-01-10 14:48:37.810565 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-01-10 14:48:37.810573 | orchestrator | Saturday 10 January 2026 14:45:45 +0000 (0:00:06.176) 0:00:14.010 ******
2026-01-10 14:48:37.810581 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-01-10 14:48:37.810588 | orchestrator |
2026-01-10 14:48:37.810596 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-01-10 14:48:37.810603 | orchestrator | Saturday 10 January 2026 14:45:49 +0000 (0:00:03.941) 0:00:17.952 ******
2026-01-10 14:48:37.810611 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-10 14:48:37.810619 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-01-10 14:48:37.810626 | orchestrator |
2026-01-10 14:48:37.810633 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-01-10 14:48:37.810641 | orchestrator | Saturday 10 January 2026 14:45:53 +0000 (0:00:04.774) 0:00:22.727 ******
2026-01-10 14:48:37.810649 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-10 14:48:37.810656 | orchestrator |
2026-01-10 14:48:37.810664 | orchestrator | TASK [service-ks-register : glance | Granting/revoking user roles] *************
2026-01-10 14:48:37.810671 | orchestrator | Saturday 10 January 2026 14:45:57 +0000 (0:00:03.233) 0:00:25.960 ******
2026-01-10 14:48:37.810679 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-01-10 14:48:37.810686 | orchestrator |
2026-01-10 14:48:37.810694 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-01-10 14:48:37.810701 | orchestrator | Saturday 10 January 2026 14:46:00 +0000 (0:00:03.433) 0:00:29.394 ******
2026-01-10 14:48:37.810738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-10 14:48:37.810749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-10 14:48:37.810763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-10 14:48:37.810772 | orchestrator |
2026-01-10 14:48:37.810779 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-01-10 14:48:37.810786 | orchestrator | Saturday 10 January 2026 14:46:04 +0000 (0:00:03.472) 0:00:32.867 ******
2026-01-10 14:48:37.810798 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:48:37.810805 | orchestrator |
2026-01-10 14:48:37.810812 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-01-10 14:48:37.810820 | orchestrator | Saturday 10 January 2026 14:46:04 +0000 (0:00:00.714) 0:00:33.582 ******
2026-01-10 14:48:37.810827 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:48:37.810834 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:48:37.810840 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:48:37.810846 | orchestrator |
2026-01-10 14:48:37.810852 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-01-10 14:48:37.810858 | orchestrator | Saturday 10 January 2026 14:46:08 +0000 (0:00:04.066) 0:00:37.648 ******
2026-01-10 14:48:37.810869 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-01-10 14:48:37.810878 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-01-10 14:48:37.810886 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-01-10 14:48:37.810893 | orchestrator |
2026-01-10 14:48:37.810899 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-01-10 14:48:37.810906 | orchestrator | Saturday 10 January 2026 14:46:10 +0000 (0:00:01.477) 0:00:39.125 ******
2026-01-10 14:48:37.810913 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-01-10 14:48:37.810920 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-01-10 14:48:37.810927 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-01-10 14:48:37.810932 | orchestrator |
2026-01-10 14:48:37.810939 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-01-10 14:48:37.810945 | orchestrator | Saturday 10 January 2026 14:46:11 +0000 (0:00:01.096) 0:00:40.222 ******
2026-01-10 14:48:37.810952 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:48:37.810959 | orchestrator | ok: [testbed-node-1]
2026-01-10 14:48:37.810965 | orchestrator | ok: [testbed-node-2]
2026-01-10 14:48:37.810972 | orchestrator |
2026-01-10 14:48:37.810979 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-01-10 14:48:37.810986 | orchestrator | Saturday 10 January 2026 14:46:11 +0000 (0:00:00.614) 0:00:40.836 ******
2026-01-10 14:48:37.810993 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:48:37.811000 | orchestrator |
2026-01-10 14:48:37.811007 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-01-10 14:48:37.811014 | orchestrator | Saturday 10 January 2026 14:46:12 +0000 (0:00:00.245) 0:00:41.082 ******
2026-01-10 14:48:37.811021 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:48:37.811028 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:48:37.811035 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:48:37.811041 | orchestrator |
2026-01-10 14:48:37.811060 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-01-10 14:48:37.811067 | orchestrator | Saturday 10 January 2026 14:46:12 +0000 (0:00:00.461) 0:00:41.543 ******
2026-01-10 14:48:37.811074 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:48:37.811081 | orchestrator |
2026-01-10 14:48:37.811088 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-01-10 14:48:37.811094 | orchestrator | Saturday 10 January 2026 14:46:13 +0000 (0:00:00.920) 0:00:42.464 ******
2026-01-10 14:48:37.811107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-10 14:48:37.811120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-10 14:48:37.811129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-10 14:48:37.811140 | orchestrator |
2026-01-10 14:48:37.811148 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2026-01-10 14:48:37.811155 | orchestrator | Saturday 10 January 2026 14:46:19 +0000 (0:00:05.615) 0:00:48.079 ******
2026-01-10 14:48:37.811167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-10 14:48:37.811174 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:48:37.811182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-10 14:48:37.811193 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:48:37.811206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-10 14:48:37.811214 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:48:37.811221 | orchestrator |
2026-01-10 14:48:37.811228 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
2026-01-10 14:48:37.811235 | orchestrator | Saturday 10 January 2026 14:46:23 +0000 (0:00:04.544) 0:00:52.624 ******
2026-01-10 14:48:37.811242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-10 14:48:37.811250 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:48:37.811263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:48:37.811277 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:37.811285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:48:37.811293 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:37.811300 | orchestrator | 2026-01-10 14:48:37.811307 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-01-10 14:48:37.811314 | orchestrator | Saturday 10 January 2026 14:46:30 +0000 (0:00:06.337) 0:00:58.961 ****** 2026-01-10 14:48:37.811321 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:37.811327 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:37.811334 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:37.811341 | orchestrator | 2026-01-10 14:48:37.811348 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-01-10 14:48:37.811359 | orchestrator | Saturday 10 January 2026 14:46:35 +0000 (0:00:05.229) 0:01:04.191 ****** 
2026-01-10 14:48:37.811381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:48:37.811390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:48:37.811401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:48:37.811414 | orchestrator | 2026-01-10 14:48:37.811426 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-01-10 14:48:37.811434 | orchestrator | Saturday 10 January 2026 14:46:39 +0000 (0:00:04.253) 0:01:08.445 ****** 2026-01-10 14:48:37.811441 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:37.811448 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:48:37.811455 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:48:37.811462 | orchestrator | 2026-01-10 14:48:37.811469 | orchestrator | TASK [glance : Copying over glance-cache.conf for 
glance_api] ****************** 2026-01-10 14:48:37.811476 | orchestrator | Saturday 10 January 2026 14:46:45 +0000 (0:00:06.160) 0:01:14.606 ****** 2026-01-10 14:48:37.811483 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:37.811490 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:37.811497 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:37.811504 | orchestrator | 2026-01-10 14:48:37.811511 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-01-10 14:48:37.811518 | orchestrator | Saturday 10 January 2026 14:46:49 +0000 (0:00:03.339) 0:01:17.945 ****** 2026-01-10 14:48:37.811525 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:37.811532 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:37.811539 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:37.811548 | orchestrator | 2026-01-10 14:48:37.811556 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-01-10 14:48:37.811563 | orchestrator | Saturday 10 January 2026 14:46:53 +0000 (0:00:04.823) 0:01:22.769 ****** 2026-01-10 14:48:37.811570 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:37.811577 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:37.811584 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:37.811591 | orchestrator | 2026-01-10 14:48:37.811598 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-01-10 14:48:37.811605 | orchestrator | Saturday 10 January 2026 14:46:59 +0000 (0:00:05.489) 0:01:28.259 ****** 2026-01-10 14:48:37.811611 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:37.811618 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:37.811626 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:37.811633 | orchestrator | 2026-01-10 14:48:37.811639 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] 
**************************** 2026-01-10 14:48:37.811647 | orchestrator | Saturday 10 January 2026 14:46:59 +0000 (0:00:00.449) 0:01:28.709 ****** 2026-01-10 14:48:37.811654 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-10 14:48:37.811661 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:37.811668 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-10 14:48:37.811680 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:37.811687 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-10 14:48:37.811694 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:37.811701 | orchestrator | 2026-01-10 14:48:37.811708 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-01-10 14:48:37.811715 | orchestrator | Saturday 10 January 2026 14:47:05 +0000 (0:00:05.908) 0:01:34.617 ****** 2026-01-10 14:48:37.811722 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:37.811729 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:48:37.811736 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:48:37.811743 | orchestrator | 2026-01-10 14:48:37.811750 | orchestrator | TASK [service-check-containers : glance | Check containers] ******************** 2026-01-10 14:48:37.811757 | orchestrator | Saturday 10 January 2026 14:47:12 +0000 (0:00:06.792) 0:01:41.409 ****** 2026-01-10 14:48:37.811772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:48:37.811781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:48:37.811801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-10 14:48:37.811809 | orchestrator | 2026-01-10 14:48:37.811816 | orchestrator | TASK [service-check-containers : glance | Notify handlers to restart containers] *** 2026-01-10 14:48:37.811823 | orchestrator | Saturday 10 January 2026 14:47:16 +0000 (0:00:03.778) 0:01:45.187 ****** 2026-01-10 14:48:37.811830 | orchestrator | changed: [testbed-node-0] => { 2026-01-10 14:48:37.811837 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:48:37.811844 | orchestrator | } 2026-01-10 14:48:37.811851 | orchestrator | changed: [testbed-node-1] => { 2026-01-10 14:48:37.811858 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:48:37.811865 | orchestrator | } 2026-01-10 14:48:37.811872 | orchestrator | changed: [testbed-node-2] => { 2026-01-10 14:48:37.811879 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:48:37.811886 | orchestrator | } 2026-01-10 14:48:37.811893 | orchestrator | 2026-01-10 14:48:37.811900 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-10 14:48:37.811911 | orchestrator | Saturday 
10 January 2026 14:47:16 +0000 (0:00:00.304) 0:01:45.492 ****** 2026-01-10 14:48:37.811918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:48:37.811930 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:37.811939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:48:37.811946 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:37.811957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-10 14:48:37.811966 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:37.811972 | orchestrator | 2026-01-10 14:48:37.811977 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-10 14:48:37.811983 | orchestrator | Saturday 10 January 2026 14:47:21 +0000 (0:00:04.403) 0:01:49.896 ****** 2026-01-10 14:48:37.811990 | orchestrator | skipping: [testbed-node-0] 2026-01-10 
14:48:37.811995 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:37.812001 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:37.812007 | orchestrator | 2026-01-10 14:48:37.812013 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-01-10 14:48:37.812020 | orchestrator | Saturday 10 January 2026 14:47:21 +0000 (0:00:00.556) 0:01:50.452 ****** 2026-01-10 14:48:37.812026 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:37.812033 | orchestrator | 2026-01-10 14:48:37.812039 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-01-10 14:48:37.812045 | orchestrator | Saturday 10 January 2026 14:47:23 +0000 (0:00:02.114) 0:01:52.566 ****** 2026-01-10 14:48:37.812104 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:37.812112 | orchestrator | 2026-01-10 14:48:37.812119 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-01-10 14:48:37.812125 | orchestrator | Saturday 10 January 2026 14:47:26 +0000 (0:00:02.353) 0:01:54.920 ****** 2026-01-10 14:48:37.812144 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:37.812151 | orchestrator | 2026-01-10 14:48:37.812158 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-01-10 14:48:37.812165 | orchestrator | Saturday 10 January 2026 14:47:28 +0000 (0:00:02.355) 0:01:57.276 ****** 2026-01-10 14:48:37.812172 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:37.812179 | orchestrator | 2026-01-10 14:48:37.812185 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-01-10 14:48:37.812193 | orchestrator | Saturday 10 January 2026 14:47:55 +0000 (0:00:27.313) 0:02:24.590 ****** 2026-01-10 14:48:37.812200 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:37.812207 | orchestrator | 2026-01-10 14:48:37.812214 | orchestrator | 
TASK [glance : Flush handlers] ************************************************* 2026-01-10 14:48:37.812220 | orchestrator | Saturday 10 January 2026 14:47:57 +0000 (0:00:01.907) 0:02:26.497 ****** 2026-01-10 14:48:37.812226 | orchestrator | 2026-01-10 14:48:37.812232 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-10 14:48:37.812237 | orchestrator | Saturday 10 January 2026 14:47:57 +0000 (0:00:00.071) 0:02:26.568 ****** 2026-01-10 14:48:37.812243 | orchestrator | 2026-01-10 14:48:37.812249 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-10 14:48:37.812255 | orchestrator | Saturday 10 January 2026 14:47:57 +0000 (0:00:00.069) 0:02:26.637 ****** 2026-01-10 14:48:37.812261 | orchestrator | 2026-01-10 14:48:37.812267 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-01-10 14:48:37.812272 | orchestrator | Saturday 10 January 2026 14:47:57 +0000 (0:00:00.070) 0:02:26.708 ****** 2026-01-10 14:48:37.812278 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:37.812284 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:48:37.812290 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:48:37.812296 | orchestrator | 2026-01-10 14:48:37.812302 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:48:37.812308 | orchestrator | testbed-node-0 : ok=28  changed=21  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-10 14:48:37.812323 | orchestrator | testbed-node-1 : ok=17  changed=11  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-10 14:48:37.812328 | orchestrator | testbed-node-2 : ok=17  changed=11  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-10 14:48:37.812334 | orchestrator | 2026-01-10 14:48:37.812340 | orchestrator | 2026-01-10 14:48:37.812345 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-10 14:48:37.812351 | orchestrator | Saturday 10 January 2026 14:48:36 +0000 (0:00:38.602) 0:03:05.311 ****** 2026-01-10 14:48:37.812363 | orchestrator | =============================================================================== 2026-01-10 14:48:37.812369 | orchestrator | glance : Restart glance-api container ---------------------------------- 38.60s 2026-01-10 14:48:37.812375 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.31s 2026-01-10 14:48:37.812380 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 6.79s 2026-01-10 14:48:37.812386 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 6.34s 2026-01-10 14:48:37.812392 | orchestrator | service-ks-register : glance | Creating/deleting endpoints -------------- 6.18s 2026-01-10 14:48:37.812397 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.16s 2026-01-10 14:48:37.812402 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.91s 2026-01-10 14:48:37.812408 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.62s 2026-01-10 14:48:37.812414 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.49s 2026-01-10 14:48:37.812419 | orchestrator | service-ks-register : glance | Creating/deleting services --------------- 5.25s 2026-01-10 14:48:37.812425 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.23s 2026-01-10 14:48:37.812430 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.82s 2026-01-10 14:48:37.812436 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.77s 2026-01-10 14:48:37.812441 | orchestrator | service-cert-copy : glance | 
Copying over backend internal TLS certificate --- 4.54s 2026-01-10 14:48:37.812447 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.40s 2026-01-10 14:48:37.812453 | orchestrator | glance : Copying over config.json files for services -------------------- 4.25s 2026-01-10 14:48:37.812458 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.07s 2026-01-10 14:48:37.812464 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.94s 2026-01-10 14:48:37.812469 | orchestrator | service-check-containers : glance | Check containers -------------------- 3.78s 2026-01-10 14:48:37.812475 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.47s 2026-01-10 14:48:37.812481 | orchestrator | 2026-01-10 14:48:37 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:48:37.812488 | orchestrator | 2026-01-10 14:48:37 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:48:37.812494 | orchestrator | 2026-01-10 14:48:37 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:48:40.868086 | orchestrator | 2026-01-10 14:48:40 | INFO  | Task defe58c1-4fee-4c60-93fd-ab3fd2b5526f is in state STARTED 2026-01-10 14:48:40.870479 | orchestrator | 2026-01-10 14:48:40 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:48:40.871901 | orchestrator | 2026-01-10 14:48:40 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:48:40.873770 | orchestrator | 2026-01-10 14:48:40 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:48:40.873824 | orchestrator | 2026-01-10 14:48:40 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:48:43.924596 | orchestrator | 2026-01-10 14:48:43 | INFO  | Task defe58c1-4fee-4c60-93fd-ab3fd2b5526f is in state STARTED 2026-01-10 
14:48:43.927012 | orchestrator | 2026-01-10 14:48:43 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:48:43.928927 | orchestrator | 2026-01-10 14:48:43 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:48:43.930834 | orchestrator | 2026-01-10 14:48:43 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:48:43.930872 | orchestrator | 2026-01-10 14:48:43 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:48:46.969564 | orchestrator | 2026-01-10 14:48:46 | INFO  | Task defe58c1-4fee-4c60-93fd-ab3fd2b5526f is in state STARTED 2026-01-10 14:48:46.970246 | orchestrator | 2026-01-10 14:48:46 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:48:46.971032 | orchestrator | 2026-01-10 14:48:46 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:48:46.972694 | orchestrator | 2026-01-10 14:48:46 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:48:46.972722 | orchestrator | 2026-01-10 14:48:46 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:48:50.026853 | orchestrator | 2026-01-10 14:48:50 | INFO  | Task defe58c1-4fee-4c60-93fd-ab3fd2b5526f is in state STARTED 2026-01-10 14:48:50.030068 | orchestrator | 2026-01-10 14:48:50 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:48:50.032474 | orchestrator | 2026-01-10 14:48:50 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state STARTED 2026-01-10 14:48:50.035448 | orchestrator | 2026-01-10 14:48:50 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:48:50.035508 | orchestrator | 2026-01-10 14:48:50 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:48:53.085814 | orchestrator | 2026-01-10 14:48:53 | INFO  | Task f7ad5e30-77df-4be0-ac5d-16d0a294adc9 is in state STARTED 2026-01-10 14:48:53.088561 | orchestrator 
| 2026-01-10 14:48:53 | INFO  | Task defe58c1-4fee-4c60-93fd-ab3fd2b5526f is in state STARTED 2026-01-10 14:48:53.091384 | orchestrator | 2026-01-10 14:48:53 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:48:53.096252 | orchestrator | 2026-01-10 14:48:53 | INFO  | Task 314f12a4-1942-42a1-ba39-50021887af8d is in state SUCCESS 2026-01-10 14:48:53.096611 | orchestrator | 2026-01-10 14:48:53.098856 | orchestrator | 2026-01-10 14:48:53.098889 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:48:53.098893 | orchestrator | 2026-01-10 14:48:53.098897 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:48:53.098900 | orchestrator | Saturday 10 January 2026 14:45:35 +0000 (0:00:00.250) 0:00:00.250 ****** 2026-01-10 14:48:53.098903 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:48:53.098909 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:48:53.098914 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:48:53.098919 | orchestrator | 2026-01-10 14:48:53.098925 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:48:53.098931 | orchestrator | Saturday 10 January 2026 14:45:36 +0000 (0:00:00.272) 0:00:00.523 ****** 2026-01-10 14:48:53.098936 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-01-10 14:48:53.098942 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-01-10 14:48:53.098948 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-01-10 14:48:53.098953 | orchestrator | 2026-01-10 14:48:53.098959 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-01-10 14:48:53.098988 | orchestrator | 2026-01-10 14:48:53.098994 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-10 14:48:53.098999 | 
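The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from a client-side polling loop that watches the deploy tasks until they leave the STARTED state. A minimal sketch of such a loop, assuming a hypothetical `get_state(task_id)` lookup (the real OSISM client API is not shown in this log):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, max_checks=100):
    """Poll until every task leaves the STARTED state.

    get_state(task_id) -> str is a hypothetical stand-in for the real
    status lookup; the actual OSISM client call is not visible here.
    Returns True if all tasks finished within max_checks rounds.
    """
    pending = set(task_ids)
    for _ in range(max_checks):
        # Check every still-pending task once per round, like the
        # grouped INFO lines in the log above.
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if not pending:
            return True
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    return False
```

The fixed one-second interval matches the log's cadence; a production loop might add backoff or a deadline instead of a round count.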
orchestrator | Saturday 10 January 2026 14:45:36 +0000 (0:00:00.384) 0:00:00.908 ****** 2026-01-10 14:48:53.099003 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:48:53.099006 | orchestrator | 2026-01-10 14:48:53.099009 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting services] *************** 2026-01-10 14:48:53.099013 | orchestrator | Saturday 10 January 2026 14:45:37 +0000 (0:00:00.530) 0:00:01.438 ****** 2026-01-10 14:48:53.099016 | orchestrator | changed: [testbed-node-0] => (item=cinder (block-storage)) 2026-01-10 14:48:53.099019 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-01-10 14:48:53.099022 | orchestrator | 2026-01-10 14:48:53.099025 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting endpoints] ************** 2026-01-10 14:48:53.099041 | orchestrator | Saturday 10 January 2026 14:45:44 +0000 (0:00:07.451) 0:00:08.890 ****** 2026-01-10 14:48:53.099044 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api-int.testbed.osism.xyz:8776/v3 -> internal) 2026-01-10 14:48:53.099048 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api.testbed.osism.xyz:8776/v3 -> public) 2026-01-10 14:48:53.099051 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-01-10 14:48:53.099055 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-01-10 14:48:53.099058 | orchestrator | 2026-01-10 14:48:53.099061 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-01-10 14:48:53.099064 | orchestrator | Saturday 10 January 2026 14:45:57 +0000 (0:00:13.193) 0:00:22.083 ****** 2026-01-10 14:48:53.099067 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-10 14:48:53.099071 
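The cinderv3 endpoints registered above keep a literal `%(tenant_id)s` placeholder in the URL; Keystone stores the template verbatim and clients substitute the caller's project ID using old-style %-formatting. A small illustration (the project ID below is a made-up example, not a value from this deployment):

```python
# Endpoint URL as registered in Keystone, taken from the log above;
# the %(tenant_id)s placeholder is stored literally.
template = "https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s"

# A client fills in its own project ID at request time.
# This ID is hypothetical, purely for illustration.
url = template % {"tenant_id": "7f3be1c2d4a54f6b9e0c8d1a2b3c4d5e"}
```

This is why the plain `cinder` (block-storage) endpoint has no placeholder while `cinderv3` (volumev3) does: the legacy catalog entry embeds the project in the path, the newer one does not.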
| orchestrator | 2026-01-10 14:48:53.099074 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-01-10 14:48:53.099077 | orchestrator | Saturday 10 January 2026 14:46:00 +0000 (0:00:02.876) 0:00:24.959 ****** 2026-01-10 14:48:53.099080 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-10 14:48:53.099083 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-01-10 14:48:53.099086 | orchestrator | 2026-01-10 14:48:53.099096 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-01-10 14:48:53.099099 | orchestrator | Saturday 10 January 2026 14:46:04 +0000 (0:00:03.638) 0:00:28.598 ****** 2026-01-10 14:48:53.099102 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-10 14:48:53.099105 | orchestrator | 2026-01-10 14:48:53.099108 | orchestrator | TASK [service-ks-register : cinder | Granting/revoking user roles] ************* 2026-01-10 14:48:53.099111 | orchestrator | Saturday 10 January 2026 14:46:07 +0000 (0:00:03.532) 0:00:32.131 ****** 2026-01-10 14:48:53.099114 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-01-10 14:48:53.099117 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-01-10 14:48:53.099120 | orchestrator | 2026-01-10 14:48:53.099123 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-01-10 14:48:53.099126 | orchestrator | Saturday 10 January 2026 14:46:14 +0000 (0:00:07.063) 0:00:39.194 ****** 2026-01-10 14:48:53.099140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:48:53.099149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:48:53.099153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:48:53.099159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.099163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.099167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.099198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.099204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.099207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.099213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.099217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.099220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.099226 | orchestrator | 2026-01-10 14:48:53.099231 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-10 14:48:53.099235 | orchestrator | Saturday 10 January 2026 14:46:17 +0000 (0:00:02.756) 0:00:41.950 ****** 2026-01-10 14:48:53.099238 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:53.099241 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:53.099244 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:53.099247 | orchestrator | 2026-01-10 14:48:53.099250 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-10 14:48:53.099253 | orchestrator | Saturday 10 January 2026 14:46:18 +0000 (0:00:00.745) 0:00:42.696 ****** 2026-01-10 14:48:53.099256 | orchestrator | included: 
/ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:48:53.099259 | orchestrator | 2026-01-10 14:48:53.099262 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-01-10 14:48:53.099265 | orchestrator | Saturday 10 January 2026 14:46:19 +0000 (0:00:00.721) 0:00:43.417 ****** 2026-01-10 14:48:53.099268 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-01-10 14:48:53.099272 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-01-10 14:48:53.099275 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-01-10 14:48:53.099278 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-01-10 14:48:53.099281 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-01-10 14:48:53.099284 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-01-10 14:48:53.099290 | orchestrator | 2026-01-10 14:48:53.099295 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-01-10 14:48:53.099301 | orchestrator | Saturday 10 January 2026 14:46:21 +0000 (0:00:02.203) 0:00:45.621 ****** 2026-01-10 14:48:53.099306 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-01-10 14:48:53.099315 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-01-10 14:48:53.099327 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-01-10 14:48:53.099331 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-01-10 14:48:53.099335 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-01-10 14:48:53.099340 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-01-10 14:48:53.099346 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-01-10 14:48:53.099351 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-01-10 14:48:53.099355 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-01-10 14:48:53.099360 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-01-10 14:48:53.099366 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-01-10 14:48:53.099371 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-01-10 14:48:53.099375 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-01-10 14:48:53.099379 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-01-10 14:48:53.099384 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-01-10 14:48:53.099389 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-01-10 14:48:53.099395 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-01-10 14:48:53.099398 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-01-10 14:48:53.099458 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-01-10 14:48:53.099466 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-01-10 14:48:53.099499 | orchestrator | changed: 
[testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-01-10 14:48:53.099755 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-01-10 14:48:53.099767 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-01-10 14:48:53.099771 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-01-10 14:48:53.099774 | orchestrator | 2026-01-10 14:48:53.099778 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-01-10 14:48:53.099781 | orchestrator | Saturday 10 January 2026 14:46:29 +0000 (0:00:08.251) 0:00:53.872 ****** 2026-01-10 14:48:53.099790 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-01-10 14:48:53.099796 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-01-10 
14:48:53.099805 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-01-10 14:48:53.099808 | orchestrator | 2026-01-10 14:48:53.099811 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-01-10 14:48:53.099814 | orchestrator | Saturday 10 January 2026 14:46:31 +0000 (0:00:01.923) 0:00:55.795 ****** 2026-01-10 14:48:53.099817 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-01-10 14:48:53.099821 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-01-10 14:48:53.099824 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-01-10 14:48:53.099843 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-01-10 14:48:53.099847 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-01-10 14:48:53.099850 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-01-10 14:48:53.099853 | orchestrator | 2026-01-10 14:48:53.099867 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-01-10 14:48:53.099870 | orchestrator | Saturday 10 January 2026 14:46:34 +0000 (0:00:03.331) 0:00:59.127 ****** 2026-01-10 14:48:53.099874 | orchestrator | 
ok: [testbed-node-0] => (item=cinder-volume) 2026-01-10 14:48:53.099877 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-01-10 14:48:53.099880 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-01-10 14:48:53.099887 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-01-10 14:48:53.099890 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-01-10 14:48:53.099893 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-01-10 14:48:53.099896 | orchestrator | 2026-01-10 14:48:53.099899 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-01-10 14:48:53.099902 | orchestrator | Saturday 10 January 2026 14:46:36 +0000 (0:00:01.421) 0:01:00.548 ****** 2026-01-10 14:48:53.099928 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:53.099933 | orchestrator | 2026-01-10 14:48:53.099936 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-01-10 14:48:53.099939 | orchestrator | Saturday 10 January 2026 14:46:36 +0000 (0:00:00.118) 0:01:00.666 ****** 2026-01-10 14:48:53.099942 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:53.100383 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:53.100390 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:53.100393 | orchestrator | 2026-01-10 14:48:53.100397 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-10 14:48:53.100400 | orchestrator | Saturday 10 January 2026 14:46:36 +0000 (0:00:00.312) 0:01:00.979 ****** 2026-01-10 14:48:53.100403 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:48:53.100406 | orchestrator | 2026-01-10 14:48:53.100409 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-01-10 14:48:53.100417 | orchestrator | 
Saturday 10 January 2026 14:46:37 +0000 (0:00:00.882) 0:01:01.862 ****** 2026-01-10 14:48:53.100422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:48:53.100430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:48:53.100450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:48:53.100455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100474 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100502 | orchestrator | 2026-01-10 14:48:53.100505 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-01-10 14:48:53.100508 | orchestrator | Saturday 10 January 2026 14:46:41 +0000 (0:00:03.740) 0:01:05.603 ****** 2026-01-10 14:48:53.100513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:48:53.100517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.100544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.100551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.100555 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:53.100558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:48:53.100564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.100568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.100585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.100591 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:53.100597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:48:53.100609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.100614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.100619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.100622 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:53.100626 | orchestrator | 2026-01-10 14:48:53.100629 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-01-10 14:48:53.100632 | orchestrator | Saturday 10 January 2026 14:46:42 +0000 (0:00:00.998) 0:01:06.601 ****** 
2026-01-10 14:48:53.100638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:48:53.100645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.100648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.100651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.100655 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:53.100660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:48:53.100663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.100672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.100696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.100701 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:53.100706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:48:53.100714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.100719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.100732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.100738 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:53.100742 | orchestrator | 2026-01-10 14:48:53.100747 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-01-10 14:48:53.100752 | orchestrator | 
Saturday 10 January 2026 14:46:43 +0000 (0:00:01.562) 0:01:08.163 ****** 2026-01-10 14:48:53.100757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:48:53.100765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:48:53.100772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:48:53.100783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100798 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100818 | orchestrator | 2026-01-10 14:48:53.100821 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-01-10 14:48:53.100824 | orchestrator | Saturday 10 January 2026 14:46:47 +0000 (0:00:04.168) 0:01:12.332 ****** 2026-01-10 14:48:53.100828 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-01-10 14:48:53.100832 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:53.100835 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-01-10 14:48:53.100838 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:53.100841 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-01-10 14:48:53.100844 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:53.100847 | orchestrator | 2026-01-10 14:48:53.100850 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-01-10 14:48:53.100853 | orchestrator | Saturday 10 January 2026 14:46:48 +0000 (0:00:00.953) 0:01:13.286 ****** 2026-01-10 14:48:53.100856 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:48:53.100859 | orchestrator | 2026-01-10 14:48:53.100862 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] ************* 2026-01-10 14:48:53.100865 | orchestrator | Saturday 10 January 2026 14:46:50 +0000 (0:00:01.824) 0:01:15.110 ****** 2026-01-10 14:48:53.100868 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:48:53.100873 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:53.100878 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:48:53.100883 | orchestrator | 2026-01-10 14:48:53.100890 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-01-10 14:48:53.100898 | orchestrator | Saturday 10 January 2026 14:46:52 +0000 (0:00:02.126) 0:01:17.236 ****** 2026-01-10 14:48:53.100903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:48:53.100912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:48:53.100918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:48:53.100923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100973 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.100982 | orchestrator | 2026-01-10 14:48:53.100985 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-01-10 14:48:53.100989 | orchestrator | Saturday 10 January 2026 14:47:11 +0000 (0:00:18.428) 0:01:35.665 ****** 2026-01-10 14:48:53.100999 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:48:53.101002 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:48:53.101005 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:48:53.101008 | orchestrator | 2026-01-10 14:48:53.101011 | orchestrator | TASK [cinder : Copying over existing policy file] 
****************************** 2026-01-10 14:48:53.101014 | orchestrator | Saturday 10 January 2026 14:47:12 +0000 (0:00:01.530) 0:01:37.195 ****** 2026-01-10 14:48:53.101020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:48:53.101024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.101041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 
'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.101049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.101052 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:53.101056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 
'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:48:53.101062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.101065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.101068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.101074 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:53.101079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:48:53.101082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.101089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.101092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.101095 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:53.101098 | orchestrator | 2026-01-10 14:48:53.101101 | orchestrator 
| TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-01-10 14:48:53.101104 | orchestrator | Saturday 10 January 2026 14:47:13 +0000 (0:00:00.805) 0:01:38.001 ****** 2026-01-10 14:48:53.101107 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:53.101110 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:53.101113 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:53.101116 | orchestrator | 2026-01-10 14:48:53.101119 | orchestrator | TASK [service-check-containers : cinder | Check containers] ******************** 2026-01-10 14:48:53.101124 | orchestrator | Saturday 10 January 2026 14:47:14 +0000 (0:00:00.363) 0:01:38.365 ****** 2026-01-10 14:48:53.101127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:48:53.101132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:48:53.101138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:48:53.101142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.101145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.101150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.101156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.101159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.101165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.101168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.101173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.101177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-10 14:48:53.101180 | orchestrator | 2026-01-10 14:48:53.101183 | orchestrator | TASK [service-check-containers : cinder | Notify handlers to restart containers] *** 2026-01-10 14:48:53.101186 | orchestrator | Saturday 10 January 2026 14:47:17 +0000 (0:00:03.164) 0:01:41.529 ****** 2026-01-10 14:48:53.101191 | orchestrator | changed: [testbed-node-0] => { 2026-01-10 14:48:53.101194 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:48:53.101198 | orchestrator | } 2026-01-10 14:48:53.101201 | orchestrator | changed: [testbed-node-1] => { 2026-01-10 14:48:53.101204 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:48:53.101208 | orchestrator | } 2026-01-10 14:48:53.101211 | orchestrator | changed: [testbed-node-2] => { 2026-01-10 14:48:53.101215 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:48:53.101218 | orchestrator | } 2026-01-10 14:48:53.101222 | orchestrator | 2026-01-10 14:48:53.101225 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-10 14:48:53.101229 | orchestrator | Saturday 10 January 2026 14:47:18 +0000 (0:00:00.965) 0:01:42.495 ****** 2026-01-10 14:48:53.101233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': 
'30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:48:53.101239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.101244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.101248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.101252 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:53.101261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:48:53.101266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.101272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.101278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.101281 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:53.101285 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:48:53.101291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.101295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.101299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-10 14:48:53.101302 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:53.101305 | orchestrator | 2026-01-10 14:48:53.101310 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-10 14:48:53.101316 | orchestrator | Saturday 10 January 2026 14:47:19 +0000 (0:00:01.596) 0:01:44.092 ****** 2026-01-10 14:48:53.101320 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:48:53.101323 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:48:53.101326 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:48:53.101330 | orchestrator | 2026-01-10 14:48:53.101333 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-01-10 14:48:53.101337 | orchestrator | Saturday 10 January 2026 14:47:20 +0000 (0:00:00.292) 
0:01:44.384 ****** 
2026-01-10 14:48:53.101340 | orchestrator | changed: [testbed-node-0] 
2026-01-10 14:48:53.101343 | orchestrator | 
2026-01-10 14:48:53.101347 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 
2026-01-10 14:48:53.101350 | orchestrator | Saturday 10 January 2026 14:47:22 +0000 (0:00:01.971) 0:01:46.355 ****** 
2026-01-10 14:48:53.101354 | orchestrator | changed: [testbed-node-0] 
2026-01-10 14:48:53.101357 | orchestrator | 
2026-01-10 14:48:53.101360 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 
2026-01-10 14:48:53.101364 | orchestrator | Saturday 10 January 2026 14:47:24 +0000 (0:00:02.431) 0:01:48.786 ****** 
2026-01-10 14:48:53.101367 | orchestrator | changed: [testbed-node-0] 
2026-01-10 14:48:53.101371 | orchestrator | 
2026-01-10 14:48:53.101374 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 
2026-01-10 14:48:53.101378 | orchestrator | Saturday 10 January 2026 14:47:43 +0000 (0:00:19.406) 0:02:08.193 ****** 
2026-01-10 14:48:53.101381 | orchestrator | 
2026-01-10 14:48:53.101385 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 
2026-01-10 14:48:53.101388 | orchestrator | Saturday 10 January 2026 14:47:43 +0000 (0:00:00.068) 0:02:08.262 ****** 
2026-01-10 14:48:53.101391 | orchestrator | 
2026-01-10 14:48:53.101395 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 
2026-01-10 14:48:53.101399 | orchestrator | Saturday 10 January 2026 14:47:43 +0000 (0:00:00.069) 0:02:08.331 ****** 
2026-01-10 14:48:53.101404 | orchestrator | 
2026-01-10 14:48:53.101409 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 
2026-01-10 14:48:53.101417 | orchestrator | Saturday 10 January 2026 14:47:44 +0000 (0:00:00.077) 0:02:08.408 ****** 
2026-01-10 14:48:53.101423 | orchestrator | changed: [testbed-node-0] 
2026-01-10 14:48:53.101427 | orchestrator | changed: [testbed-node-1] 
2026-01-10 14:48:53.101432 | orchestrator | changed: [testbed-node-2] 
2026-01-10 14:48:53.101436 | orchestrator | 
2026-01-10 14:48:53.101441 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 
2026-01-10 14:48:53.101446 | orchestrator | Saturday 10 January 2026 14:48:09 +0000 (0:00:25.368) 0:02:33.776 ****** 
2026-01-10 14:48:53.101451 | orchestrator | changed: [testbed-node-0] 
2026-01-10 14:48:53.101456 | orchestrator | changed: [testbed-node-2] 
2026-01-10 14:48:53.101461 | orchestrator | changed: [testbed-node-1] 
2026-01-10 14:48:53.101466 | orchestrator | 
2026-01-10 14:48:53.101471 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 
2026-01-10 14:48:53.101477 | orchestrator | Saturday 10 January 2026 14:48:15 +0000 (0:00:06.513) 0:02:40.289 ****** 
2026-01-10 14:48:53.101481 | orchestrator | changed: [testbed-node-0] 
2026-01-10 14:48:53.101487 | orchestrator | changed: [testbed-node-1] 
2026-01-10 14:48:53.101492 | orchestrator | changed: [testbed-node-2] 
2026-01-10 14:48:53.101497 | orchestrator | 
2026-01-10 14:48:53.101502 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 
2026-01-10 14:48:53.101507 | orchestrator | Saturday 10 January 2026 14:48:44 +0000 (0:00:28.584) 0:03:08.874 ****** 
2026-01-10 14:48:53.101512 | orchestrator | changed: [testbed-node-0] 
2026-01-10 14:48:53.101517 | orchestrator | changed: [testbed-node-1] 
2026-01-10 14:48:53.101522 | orchestrator | changed: [testbed-node-2] 
2026-01-10 14:48:53.101527 | orchestrator | 
2026-01-10 14:48:53.101532 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 
2026-01-10 14:48:53.101541 | orchestrator | Saturday 10 January 2026 14:48:51 +0000 (0:00:06.880) 0:03:15.755 ****** 
2026-01-10 14:48:53.101550 | orchestrator | skipping: 
[testbed-node-0] 2026-01-10 14:48:53.101555 | orchestrator | 2026-01-10 14:48:53.101561 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:48:53.101567 | orchestrator | testbed-node-0 : ok=32  changed=23  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-10 14:48:53.101573 | orchestrator | testbed-node-1 : ok=23  changed=16  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-10 14:48:53.101578 | orchestrator | testbed-node-2 : ok=23  changed=16  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-10 14:48:53.101583 | orchestrator | 2026-01-10 14:48:53.101588 | orchestrator | 2026-01-10 14:48:53.101593 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:48:53.101598 | orchestrator | Saturday 10 January 2026 14:48:51 +0000 (0:00:00.253) 0:03:16.009 ****** 2026-01-10 14:48:53.101603 | orchestrator | =============================================================================== 2026-01-10 14:48:53.101608 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 28.58s 2026-01-10 14:48:53.101635 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.37s 2026-01-10 14:48:53.101640 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.41s 2026-01-10 14:48:53.101645 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 18.43s 2026-01-10 14:48:53.101650 | orchestrator | service-ks-register : cinder | Creating/deleting endpoints ------------- 13.19s 2026-01-10 14:48:53.101654 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 8.25s 2026-01-10 14:48:53.101659 | orchestrator | service-ks-register : cinder | Creating/deleting services --------------- 7.45s 2026-01-10 14:48:53.101663 | orchestrator | service-ks-register : cinder | 
Granting/revoking user roles ------------- 7.06s 2026-01-10 14:48:53.101673 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.88s 2026-01-10 14:48:53.101681 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.51s 2026-01-10 14:48:53.101686 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.17s 2026-01-10 14:48:53.101692 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.74s 2026-01-10 14:48:53.101697 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.64s 2026-01-10 14:48:53.101702 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.53s 2026-01-10 14:48:53.101707 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.33s 2026-01-10 14:48:53.101712 | orchestrator | service-check-containers : cinder | Check containers -------------------- 3.16s 2026-01-10 14:48:53.101716 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.88s 2026-01-10 14:48:53.101721 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.76s 2026-01-10 14:48:53.101726 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.43s 2026-01-10 14:48:53.101731 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.20s 2026-01-10 14:48:53.101736 | orchestrator | 2026-01-10 14:48:53 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:48:53.101741 | orchestrator | 2026-01-10 14:48:53 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:48:56.145342 | orchestrator | 2026-01-10 14:48:56 | INFO  | Task f7ad5e30-77df-4be0-ac5d-16d0a294adc9 is in state STARTED 2026-01-10 14:48:56.147475 | orchestrator | 2026-01-10 14:48:56 | INFO  | Task 
defe58c1-4fee-4c60-93fd-ab3fd2b5526f is in state STARTED [... repeated polling trimmed: tasks f7ad5e30-77df-4be0-ac5d-16d0a294adc9, defe58c1-4fee-4c60-93fd-ab3fd2b5526f, c675ce34-5677-4951-8384-6a5ecff98e0c and 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 reported "is in state STARTED" every ~3 seconds from 14:48:56 to 14:50:45 ...] 2026-01-10 14:50:45.444872 | orchestrator | 2026-01-10 14:50:45 | INFO  | Task f7ad5e30-77df-4be0-ac5d-16d0a294adc9 is in state STARTED 2026-01-10 14:50:45.445496 | orchestrator | 2026-01-10 14:50:45 | INFO  | Task
defe58c1-4fee-4c60-93fd-ab3fd2b5526f is in state STARTED 2026-01-10 14:50:45.446280 | orchestrator | 2026-01-10 14:50:45 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:50:45.446962 | orchestrator | 2026-01-10 14:50:45 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:50:45.446986 | orchestrator | 2026-01-10 14:50:45 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:50:48.474219 | orchestrator | 2026-01-10 14:50:48 | INFO  | Task f7ad5e30-77df-4be0-ac5d-16d0a294adc9 is in state STARTED 2026-01-10 14:50:48.475726 | orchestrator | 2026-01-10 14:50:48 | INFO  | Task defe58c1-4fee-4c60-93fd-ab3fd2b5526f is in state SUCCESS 2026-01-10 14:50:48.476530 | orchestrator | 2026-01-10 14:50:48.476558 | orchestrator | 2026-01-10 14:50:48.476563 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:50:48.476568 | orchestrator | 2026-01-10 14:50:48.476572 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:50:48.476576 | orchestrator | Saturday 10 January 2026 14:48:41 +0000 (0:00:00.270) 0:00:00.270 ****** 2026-01-10 14:50:48.476580 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:50:48.476585 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:50:48.476597 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:50:48.476601 | orchestrator | 2026-01-10 14:50:48.476605 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:50:48.476609 | orchestrator | Saturday 10 January 2026 14:48:41 +0000 (0:00:00.338) 0:00:00.608 ****** 2026-01-10 14:50:48.476613 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-01-10 14:50:48.476617 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-01-10 14:50:48.476621 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-01-10 
14:50:48.476625 | orchestrator |
2026-01-10 14:50:48.476629 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-01-10 14:50:48.476633 | orchestrator |
2026-01-10 14:50:48.476636 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-01-10 14:50:48.476640 | orchestrator | Saturday 10 January 2026 14:48:42 +0000 (0:00:00.457) 0:00:01.066 ******
2026-01-10 14:50:48.476645 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:50:48.476649 | orchestrator |
2026-01-10 14:50:48.476653 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] *************
2026-01-10 14:50:48.476657 | orchestrator | Saturday 10 January 2026 14:48:42 +0000 (0:00:00.547) 0:00:01.614 ******
2026-01-10 14:50:48.476661 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-01-10 14:50:48.476664 | orchestrator |
2026-01-10 14:50:48.476668 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting endpoints] ************
2026-01-10 14:50:48.476683 | orchestrator | Saturday 10 January 2026 14:48:45 +0000 (0:00:03.157) 0:00:04.771 ******
2026-01-10 14:50:48.476690 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-01-10 14:50:48.476697 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-01-10 14:50:48.476702 | orchestrator |
2026-01-10 14:50:48.476709 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-01-10 14:50:48.476715 | orchestrator | Saturday 10 January 2026 14:48:52 +0000 (0:00:06.420) 0:00:11.192 ******
2026-01-10 14:50:48.476721 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-10 14:50:48.476728 | orchestrator |
2026-01-10 14:50:48.476733 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-01-10 14:50:48.476736 | orchestrator | Saturday 10 January 2026 14:48:55 +0000 (0:00:03.378) 0:00:14.570 ******
2026-01-10 14:50:48.476740 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-10 14:50:48.476744 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-01-10 14:50:48.476748 | orchestrator |
2026-01-10 14:50:48.476751 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-01-10 14:50:48.476755 | orchestrator | Saturday 10 January 2026 14:48:59 +0000 (0:00:03.792) 0:00:18.363 ******
2026-01-10 14:50:48.476764 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-10 14:50:48.476768 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-01-10 14:50:48.476772 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-01-10 14:50:48.476776 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-01-10 14:50:48.476780 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-01-10 14:50:48.476783 | orchestrator |
2026-01-10 14:50:48.476787 | orchestrator | TASK [service-ks-register : barbican | Granting/revoking user roles] ***********
2026-01-10 14:50:48.476822 | orchestrator | Saturday 10 January 2026 14:49:16 +0000 (0:00:17.168) 0:00:35.532 ******
2026-01-10 14:50:48.476826 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-01-10 14:50:48.476830 | orchestrator |
2026-01-10 14:50:48.476834 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-01-10 14:50:48.476838 | orchestrator | Saturday 10 January 2026 14:49:20 +0000 (0:00:03.875) 0:00:39.407 ******
2026-01-10 14:50:48.476866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api',
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:50:48.476977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:50:48.476991 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.476996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:50:48.477000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.477005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.477014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.477021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:50:48.477026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:50:48.477030 | orchestrator |
2026-01-10 14:50:48.477034 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-01-10 14:50:48.477037 | orchestrator | Saturday 10 January 2026 14:49:22 +0000 (0:00:02.201) 0:00:41.608 ******
2026-01-10 14:50:48.477041 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-01-10 14:50:48.477045 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-01-10 14:50:48.477049 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-01-10 14:50:48.477053 | orchestrator |
2026-01-10 14:50:48.477056 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-01-10 14:50:48.477060 | orchestrator | Saturday 10 January 2026 14:49:23 +0000 (0:00:01.208) 0:00:42.817 ******
2026-01-10 14:50:48.477065 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:50:48.477071 | orchestrator |
2026-01-10 14:50:48.477089 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-01-10 14:50:48.477095 | orchestrator | Saturday 10 January 2026 14:49:24 +0000 (0:00:00.128) 0:00:42.945 ******
2026-01-10 14:50:48.477101 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:50:48.477107 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:50:48.477112 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:50:48.477118 | orchestrator |
2026-01-10 14:50:48.477124 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-01-10 14:50:48.477371 | orchestrator | Saturday 10 January 2026 14:49:24 +0000 (0:00:00.537) 0:00:43.482 ******
2026-01-10 14:50:48.477380 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:50:48.477384 | orchestrator |
2026-01-10 14:50:48.477389 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-01-10 14:50:48.477393 | orchestrator | Saturday 10 January 2026 14:49:25 +0000 (0:00:01.097) 0:00:44.580 ******
2026-01-10 14:50:48.477399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311',
'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:50:48.477427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:50:48.477434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:50:48.477439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.477444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.477448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.477461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.477466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.477470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 
'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:50:48.477473 | orchestrator |
2026-01-10 14:50:48.477477 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2026-01-10 14:50:48.477481 | orchestrator | Saturday 10 January 2026 14:49:29 +0000 (0:00:03.646) 0:00:48.227 ******
2026-01-10 14:50:48.477485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:50:48.477489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name':
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.477499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.477503 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:48.477510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:50:48.477514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.477518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.477522 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:48.477526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:50:48.477535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.477541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:50:48.477545 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:50:48.477549 | orchestrator |
2026-01-10 14:50:48.477553 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2026-01-10 14:50:48.477557 | orchestrator | Saturday 10 January 2026 14:49:30 +0000 (0:00:00.994) 0:00:49.222 ******
2026-01-10 14:50:48.477561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:50:48.477565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.477569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.477579 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:48.477586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:50:48.477593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.477597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.477601 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:48.477605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:50:48.477609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.477615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.477619 | orchestrator | skipping: [testbed-node-2] 2026-01-10 
14:50:48.477623 | orchestrator |
2026-01-10 14:50:48.477627 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-01-10 14:50:48.477631 | orchestrator | Saturday 10 January 2026 14:49:32 +0000 (0:00:01.728) 0:00:50.950 ******
2026-01-10 14:50:48.477640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:50:48.477644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'},
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:50:48.477649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:50:48.477655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.477661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.477667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.477671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:50:48.477675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:50:48.477708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:50:48.477715 | orchestrator |
2026-01-10 14:50:48.477719 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-01-10 14:50:48.477723 | orchestrator | Saturday 10 January 2026 14:49:35 +0000 (0:00:03.625) 0:00:54.575 ******
2026-01-10 14:50:48.477727 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:50:48.477731 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:50:48.477735 | orchestrator | changed: [testbed-node-2]
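Each container item in the task output above carries a kolla-style `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`, with `test` given as `['CMD-SHELL', '<command>']`). As a minimal sketch of what those fields correspond to, the following assumes the values are plain seconds and maps such a dict onto Docker's `docker run --health-*` flags; the `healthcheck_flags` helper and the sample dict are illustrative, not part of the job:

```python
# Sketch: translate a kolla-style healthcheck dict (as seen in the log items
# above) into equivalent `docker run` health-check flags. Assumes the numeric
# fields are seconds; field names mirror the logged dicts.
def healthcheck_flags(hc: dict) -> list[str]:
    flags = [
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", hc["retries"],
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]
    # kolla encodes the probe as ['CMD-SHELL', '<command>'];
    # Docker's --health-cmd takes the shell command string directly.
    if hc["test"][0] == "CMD-SHELL":
        flags += ["--health-cmd", hc["test"][1]]
    return flags

# Sample values taken from the barbican-api item for testbed-node-0.
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
      "timeout": "30"}
print(healthcheck_flags(hc))
```

`healthcheck_curl` and `healthcheck_port` are helper scripts shipped in the kolla images, which is why the probes reference them rather than raw `curl` invocations.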
2026-01-10 14:50:48.477738 | orchestrator |
2026-01-10 14:50:48.477742 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-01-10 14:50:48.477746 | orchestrator | Saturday 10 January 2026 14:49:39 +0000 (0:00:01.053) 0:00:58.804 ******
2026-01-10 14:50:48.477750 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-10 14:50:48.477754 | orchestrator |
2026-01-10 14:50:48.477757 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-01-10 14:50:48.477761 | orchestrator | Saturday 10 January 2026 14:49:41 +0000 (0:00:01.053) 0:00:59.858 ******
2026-01-10 14:50:48.477765 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:50:48.477768 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:50:48.477772 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:50:48.477783 | orchestrator |
2026-01-10 14:50:48.477787 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-01-10 14:50:48.477790 | orchestrator | Saturday 10 January 2026 14:49:41 +0000 (0:00:00.683) 0:01:00.542 ******
2026-01-10 14:50:48.477799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']},
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:50:48.477804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:50:48.477808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:50:48.477816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.477822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.477837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.477848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.477855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.477865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:50:48.477906 | orchestrator |
2026-01-10 14:50:48.477914 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2026-01-10 14:50:48.477921 | orchestrator | Saturday 10 January 2026 14:49:51 +0000 (0:00:09.903) 0:01:10.445 ******
2026-01-10 14:50:48.477928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:50:48.477947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image':
'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.477962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.477969 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:48.477976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:50:48.477988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.477994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.478000 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:48.478005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:50:48.478039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.478045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:50:48.478052 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:50:48.478056 | orchestrator |
2026-01-10 14:50:48.478062 | orchestrator | TASK [service-check-containers : barbican | Check containers] ******************
2026-01-10 14:50:48.478071 | orchestrator | Saturday 10 January 2026 14:49:52 +0000 (0:00:01.391) 0:01:11.837 ******
2026-01-10 14:50:48.478079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:50:48.478086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:50:48.478104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:50:48.478110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.478120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.478126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-10 14:50:48.478131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:50:48.478137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:50:48.478147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:50:48.478153 | orchestrator |
2026-01-10 14:50:48.478160 | orchestrator | TASK [service-check-containers : barbican | Notify handlers to restart containers] ***
2026-01-10 14:50:48.478167 | orchestrator | Saturday 10 January 2026 14:49:57 +0000 (0:00:04.475) 0:01:16.313 ******
2026-01-10 14:50:48.478175 | orchestrator | changed: [testbed-node-0] => {
2026-01-10 14:50:48.478181 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:50:48.478188 | orchestrator | }
2026-01-10 14:50:48.478194 | orchestrator | changed: [testbed-node-1] => {
2026-01-10 14:50:48.478200 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:50:48.478212 | orchestrator | }
2026-01-10 14:50:48.478219 | orchestrator | changed: [testbed-node-2] => {
2026-01-10 14:50:48.478225 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:50:48.478231 | orchestrator | }
2026-01-10 14:50:48.478238 | orchestrator |
2026-01-10 14:50:48.478243 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-10 14:50:48.478249 | orchestrator | Saturday 10 January 2026 14:49:58 +0000 (0:00:00.636) 0:01:16.949 ******
2026-01-10 14:50:48.478255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:50:48.478262 | orchestrator | skipping: [testbed-node-0] =>
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.478268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.478274 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:48.478285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:50:48.478296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.478306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.478313 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:48.478319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:50:48.478326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.478332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:50:48.478338 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:48.478345 | orchestrator | 2026-01-10 14:50:48.478352 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-10 14:50:48.478359 | orchestrator | Saturday 10 January 2026 14:50:00 +0000 (0:00:02.215) 0:01:19.165 ****** 2026-01-10 14:50:48.478365 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:50:48.478376 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:50:48.478383 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:50:48.478390 | orchestrator | 2026-01-10 14:50:48.478397 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-01-10 14:50:48.478406 | orchestrator | Saturday 10 January 2026 14:50:01 +0000 (0:00:01.133) 0:01:20.298 ****** 2026-01-10 14:50:48.478410 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:50:48.478415 | orchestrator | 2026-01-10 14:50:48.478419 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-01-10 14:50:48.478423 | orchestrator | Saturday 10 January 2026 14:50:03 +0000 (0:00:02.277) 0:01:22.576 ****** 2026-01-10 14:50:48.478427 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:50:48.478432 | orchestrator | 2026-01-10 14:50:48.478439 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-01-10 14:50:48.478443 | orchestrator | Saturday 10 January 2026 14:50:05 +0000 (0:00:02.201) 0:01:24.778 ****** 2026-01-10 14:50:48.478447 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:50:48.478451 | orchestrator | 2026-01-10 14:50:48.478456 | orchestrator | TASK [barbican : Flush handlers] 
*********************************************** 2026-01-10 14:50:48.478460 | orchestrator | Saturday 10 January 2026 14:50:17 +0000 (0:00:11.567) 0:01:36.346 ****** 2026-01-10 14:50:48.478464 | orchestrator | 2026-01-10 14:50:48.478469 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-10 14:50:48.478473 | orchestrator | Saturday 10 January 2026 14:50:17 +0000 (0:00:00.066) 0:01:36.413 ****** 2026-01-10 14:50:48.478477 | orchestrator | 2026-01-10 14:50:48.478482 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-10 14:50:48.478486 | orchestrator | Saturday 10 January 2026 14:50:17 +0000 (0:00:00.169) 0:01:36.582 ****** 2026-01-10 14:50:48.478490 | orchestrator | 2026-01-10 14:50:48.478495 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-01-10 14:50:48.478499 | orchestrator | Saturday 10 January 2026 14:50:17 +0000 (0:00:00.134) 0:01:36.716 ****** 2026-01-10 14:50:48.478503 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:50:48.478507 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:50:48.478512 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:50:48.478516 | orchestrator | 2026-01-10 14:50:48.478520 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-01-10 14:50:48.478524 | orchestrator | Saturday 10 January 2026 14:50:30 +0000 (0:00:12.236) 0:01:48.953 ****** 2026-01-10 14:50:48.478529 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:50:48.478533 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:50:48.478537 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:50:48.478542 | orchestrator | 2026-01-10 14:50:48.478546 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-01-10 14:50:48.478550 | orchestrator | Saturday 10 January 2026 14:50:37 +0000 (0:00:07.392) 
0:01:56.346 ****** 2026-01-10 14:50:48.478555 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:50:48.478561 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:50:48.478570 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:50:48.478578 | orchestrator | 2026-01-10 14:50:48.478584 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:50:48.478592 | orchestrator | testbed-node-0 : ok=25  changed=19  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-10 14:50:48.478599 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 14:50:48.478605 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 14:50:48.478611 | orchestrator | 2026-01-10 14:50:48.478618 | orchestrator | 2026-01-10 14:50:48.478624 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:50:48.478631 | orchestrator | Saturday 10 January 2026 14:50:45 +0000 (0:00:07.965) 0:02:04.312 ****** 2026-01-10 14:50:48.478724 | orchestrator | =============================================================================== 2026-01-10 14:50:48.478729 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.17s 2026-01-10 14:50:48.478734 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.24s 2026-01-10 14:50:48.478738 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.57s 2026-01-10 14:50:48.478743 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.90s 2026-01-10 14:50:48.478747 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.97s 2026-01-10 14:50:48.478752 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 7.39s 
2026-01-10 14:50:48.478759 | orchestrator | service-ks-register : barbican | Creating/deleting endpoints ------------ 6.42s 2026-01-10 14:50:48.478767 | orchestrator | service-check-containers : barbican | Check containers ------------------ 4.48s 2026-01-10 14:50:48.478776 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 4.23s 2026-01-10 14:50:48.478783 | orchestrator | service-ks-register : barbican | Granting/revoking user roles ----------- 3.88s 2026-01-10 14:50:48.478789 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.79s 2026-01-10 14:50:48.478795 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.65s 2026-01-10 14:50:48.478801 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.63s 2026-01-10 14:50:48.478806 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.38s 2026-01-10 14:50:48.478812 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------- 3.16s 2026-01-10 14:50:48.478818 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.28s 2026-01-10 14:50:48.478824 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.22s 2026-01-10 14:50:48.478829 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.20s 2026-01-10 14:50:48.478835 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.20s 2026-01-10 14:50:48.478847 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.73s 2026-01-10 14:50:48.478854 | orchestrator | 2026-01-10 14:50:48 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:50:48.478861 | orchestrator | 2026-01-10 14:50:48 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state 
STARTED 2026-01-10 14:50:48.478916 | orchestrator | 2026-01-10 14:50:48 | INFO  | Task 160fbb83-a71f-4bd1-9296-f8505d911308 is in state STARTED 2026-01-10 14:50:48.478969 | orchestrator | 2026-01-10 14:50:48 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:51:52.475772 | orchestrator | 2026-01-10 14:51:52 | INFO  | Task f7ad5e30-77df-4be0-ac5d-16d0a294adc9 is in state STARTED 2026-01-10 14:51:52.476468 | orchestrator | 2026-01-10 14:51:52 | INFO  | Task d0897284-56ea-426a-b367-9987ca494dc0 is in state STARTED 2026-01-10 14:51:52.477526 | orchestrator | 2026-01-10 14:51:52 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:51:52.478578 | orchestrator | 2026-01-10 14:51:52 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:51:52.479178 | orchestrator | 2026-01-10 14:51:52 | INFO  | Task 160fbb83-a71f-4bd1-9296-f8505d911308 is in state SUCCESS 2026-01-10 14:51:52.479195 | orchestrator | 2026-01-10 14:51:52 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:52:10.840256 | orchestrator | 2026-01-10 14:52:10 | INFO  | Task f7ad5e30-77df-4be0-ac5d-16d0a294adc9 is in state STARTED 2026-01-10 14:52:10.842570 | orchestrator | 2026-01-10 14:52:10 | INFO  | Task d0897284-56ea-426a-b367-9987ca494dc0 is in state STARTED 2026-01-10 14:52:10.844826 | orchestrator | 2026-01-10 14:52:10 | INFO  | Task
c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:52:10.846316 | orchestrator | 2026-01-10 14:52:10 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:52:10.846368 | orchestrator | 2026-01-10 14:52:10 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:52:13.911720 | orchestrator | 2026-01-10 14:52:13 | INFO  | Task f7ad5e30-77df-4be0-ac5d-16d0a294adc9 is in state STARTED 2026-01-10 14:52:13.913613 | orchestrator | 2026-01-10 14:52:13 | INFO  | Task d0897284-56ea-426a-b367-9987ca494dc0 is in state STARTED 2026-01-10 14:52:13.915498 | orchestrator | 2026-01-10 14:52:13 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:52:13.917967 | orchestrator | 2026-01-10 14:52:13 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:52:13.918085 | orchestrator | 2026-01-10 14:52:13 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:52:16.973363 | orchestrator | 2026-01-10 14:52:16 | INFO  | Task f7ad5e30-77df-4be0-ac5d-16d0a294adc9 is in state SUCCESS 2026-01-10 14:52:16.974414 | orchestrator | 2026-01-10 14:52:16.974452 | orchestrator | 2026-01-10 14:52:16.974460 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-01-10 14:52:16.974467 | orchestrator | 2026-01-10 14:52:16.974474 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-01-10 14:52:16.974481 | orchestrator | Saturday 10 January 2026 14:50:51 +0000 (0:00:00.068) 0:00:00.068 ****** 2026-01-10 14:52:16.974488 | orchestrator | changed: [localhost] 2026-01-10 14:52:16.974496 | orchestrator | 2026-01-10 14:52:16.974502 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-01-10 14:52:16.974509 | orchestrator | Saturday 10 January 2026 14:50:52 +0000 (0:00:01.144) 0:00:01.212 ****** 2026-01-10 14:52:16.974515 | orchestrator | FAILED - 
RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 2026-01-10 14:52:16.974522 | orchestrator | changed: [localhost] 2026-01-10 14:52:16.974528 | orchestrator | 2026-01-10 14:52:16.974535 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-01-10 14:52:16.974541 | orchestrator | Saturday 10 January 2026 14:51:44 +0000 (0:00:51.378) 0:00:52.591 ****** 2026-01-10 14:52:16.974562 | orchestrator | changed: [localhost] 2026-01-10 14:52:16.974568 | orchestrator | 2026-01-10 14:52:16.974575 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:52:16.974582 | orchestrator | 2026-01-10 14:52:16.974589 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:52:16.974596 | orchestrator | Saturday 10 January 2026 14:51:48 +0000 (0:00:04.471) 0:00:57.062 ****** 2026-01-10 14:52:16.974603 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:52:16.974610 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:52:16.974617 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:52:16.974624 | orchestrator | 2026-01-10 14:52:16.974631 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:52:16.974639 | orchestrator | Saturday 10 January 2026 14:51:48 +0000 (0:00:00.280) 0:00:57.343 ****** 2026-01-10 14:52:16.974646 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-01-10 14:52:16.974653 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-01-10 14:52:16.974661 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-01-10 14:52:16.974668 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-01-10 14:52:16.974675 | orchestrator | 2026-01-10 14:52:16.974682 | orchestrator | PLAY [Apply role ironic] 
******************************************************* 2026-01-10 14:52:16.974689 | orchestrator | skipping: no hosts matched 2026-01-10 14:52:16.974697 | orchestrator | 2026-01-10 14:52:16.974704 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:52:16.974881 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:52:16.974891 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:52:16.974910 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:52:16.974936 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:52:16.974943 | orchestrator | 2026-01-10 14:52:16.974950 | orchestrator | 2026-01-10 14:52:16.974957 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:52:16.974963 | orchestrator | Saturday 10 January 2026 14:51:49 +0000 (0:00:00.535) 0:00:57.878 ****** 2026-01-10 14:52:16.974971 | orchestrator | =============================================================================== 2026-01-10 14:52:16.974978 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 51.38s 2026-01-10 14:52:16.974985 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.47s 2026-01-10 14:52:16.974992 | orchestrator | Ensure the destination directory exists --------------------------------- 1.14s 2026-01-10 14:52:16.974999 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s 2026-01-10 14:52:16.975004 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2026-01-10 14:52:16.975008 | orchestrator | 2026-01-10 14:52:16.975012 | orchestrator | 2026-01-10 
14:52:16.975016 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:52:16.975023 | orchestrator | 2026-01-10 14:52:16.975030 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:52:16.975036 | orchestrator | Saturday 10 January 2026 14:48:55 +0000 (0:00:00.250) 0:00:00.250 ****** 2026-01-10 14:52:16.975042 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:52:16.975049 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:52:16.975056 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:52:16.975062 | orchestrator | 2026-01-10 14:52:16.975069 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:52:16.975075 | orchestrator | Saturday 10 January 2026 14:48:56 +0000 (0:00:00.315) 0:00:00.565 ****** 2026-01-10 14:52:16.975090 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-01-10 14:52:16.975098 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-01-10 14:52:16.975102 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-01-10 14:52:16.975107 | orchestrator | 2026-01-10 14:52:16.975111 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-01-10 14:52:16.975115 | orchestrator | 2026-01-10 14:52:16.975120 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-10 14:52:16.975124 | orchestrator | Saturday 10 January 2026 14:48:56 +0000 (0:00:00.385) 0:00:00.951 ****** 2026-01-10 14:52:16.975134 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:52:16.975139 | orchestrator | 2026-01-10 14:52:16.975153 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************ 2026-01-10 14:52:16.975157 | orchestrator | Saturday 10 
January 2026 14:48:57 +0000 (0:00:00.491) 0:00:01.442 ****** 2026-01-10 14:52:16.975161 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-01-10 14:52:16.975166 | orchestrator | 2026-01-10 14:52:16.975170 | orchestrator | TASK [service-ks-register : designate | Creating/deleting endpoints] *********** 2026-01-10 14:52:16.975174 | orchestrator | Saturday 10 January 2026 14:49:00 +0000 (0:00:03.215) 0:00:04.658 ****** 2026-01-10 14:52:16.975179 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-01-10 14:52:16.975183 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-01-10 14:52:16.975187 | orchestrator | 2026-01-10 14:52:16.975192 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-01-10 14:52:16.975196 | orchestrator | Saturday 10 January 2026 14:49:06 +0000 (0:00:06.574) 0:00:11.233 ****** 2026-01-10 14:52:16.975200 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-10 14:52:16.975205 | orchestrator | 2026-01-10 14:52:16.975212 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-01-10 14:52:16.975218 | orchestrator | Saturday 10 January 2026 14:49:10 +0000 (0:00:03.968) 0:00:15.202 ****** 2026-01-10 14:52:16.975224 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-10 14:52:16.975231 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-01-10 14:52:16.975237 | orchestrator | 2026-01-10 14:52:16.975245 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-01-10 14:52:16.975249 | orchestrator | Saturday 10 January 2026 14:49:14 +0000 (0:00:04.067) 0:00:19.270 ****** 2026-01-10 14:52:16.975254 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-10 14:52:16.975258 | orchestrator | 
2026-01-10 14:52:16.975263 | orchestrator | TASK [service-ks-register : designate | Granting/revoking user roles] ********** 2026-01-10 14:52:16.975267 | orchestrator | Saturday 10 January 2026 14:49:18 +0000 (0:00:03.315) 0:00:22.585 ****** 2026-01-10 14:52:16.975272 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-01-10 14:52:16.975276 | orchestrator | 2026-01-10 14:52:16.975280 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-01-10 14:52:16.975283 | orchestrator | Saturday 10 January 2026 14:49:22 +0000 (0:00:04.064) 0:00:26.650 ****** 2026-01-10 14:52:16.975289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:52:16.975299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:52:16.975309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:52:16.975315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975434 | orchestrator | 2026-01-10 14:52:16.975438 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-01-10 14:52:16.975442 | orchestrator | Saturday 10 January 2026 14:49:26 +0000 (0:00:03.960) 0:00:30.611 ****** 2026-01-10 14:52:16.975446 | orchestrator | skipping: [testbed-node-0] 
2026-01-10 14:52:16.975453 | orchestrator | 2026-01-10 14:52:16.975456 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-01-10 14:52:16.975460 | orchestrator | Saturday 10 January 2026 14:49:26 +0000 (0:00:00.217) 0:00:30.828 ****** 2026-01-10 14:52:16.975464 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:52:16.975467 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:52:16.975471 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:52:16.975475 | orchestrator | 2026-01-10 14:52:16.975478 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-10 14:52:16.975482 | orchestrator | Saturday 10 January 2026 14:49:26 +0000 (0:00:00.309) 0:00:31.137 ****** 2026-01-10 14:52:16.975486 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:52:16.975511 | orchestrator | 2026-01-10 14:52:16.975557 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-01-10 14:52:16.975564 | orchestrator | Saturday 10 January 2026 14:49:27 +0000 (0:00:01.113) 0:00:32.251 ****** 2026-01-10 14:52:16.975569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': 
['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:52:16.975580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:52:16.975603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': 
['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:52:16.975613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.975708 | orchestrator | 2026-01-10 14:52:16.975712 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-01-10 14:52:16.975716 | orchestrator | Saturday 10 January 2026 14:49:34 +0000 (0:00:06.707) 0:00:38.958 ****** 2026-01-10 14:52:16.975720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:52:16.976256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:52:16.976276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:52:16.976289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976316 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:52:16.976353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:52:16.976361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:52:16.976371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:52:16.976378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976427 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:52:16.976433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976447 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:52:16.976453 | orchestrator | 2026-01-10 14:52:16.976470 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-01-10 14:52:16.976477 | orchestrator | Saturday 10 January 2026 14:49:37 +0000 (0:00:03.335) 0:00:42.293 ****** 2026-01-10 14:52:16.976484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:52:16.976497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:52:16.976508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:52:16.976538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:52:16.976552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976558 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:52:16.976565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:52:16.976571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:52:16.976578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976616 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:52:16.976620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.976638 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:52:16.976645 | orchestrator | 2026-01-10 14:52:16.976651 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-01-10 14:52:16.976657 | orchestrator | Saturday 10 January 2026 14:49:41 +0000 (0:00:03.226) 0:00:45.520 ****** 2026-01-10 14:52:16.976666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:52:16.976684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:52:16.976691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:52:16.976697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-10 14:52:16.976701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-10 14:52:16.976705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-10 14:52:16.976716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976789 | orchestrator |
2026-01-10 14:52:16.976796 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-01-10 14:52:16.976800 | orchestrator | Saturday 10 January 2026 14:49:48 +0000 (0:00:07.100) 0:00:52.620 ******
2026-01-10 14:52:16.976807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:52:16.976813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:52:16.976817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:52:16.976821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-10 14:52:16.976825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-10 14:52:16.976832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-10 14:52:16.976841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.976902 | orchestrator |
2026-01-10 14:52:16.976906 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-01-10 14:52:16.976911 | orchestrator | Saturday 10 January 2026 14:50:11 +0000 (0:00:23.476) 0:01:16.097 ******
2026-01-10 14:52:16.976915 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-01-10 14:52:16.976920 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-01-10 14:52:16.976924 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-01-10 14:52:16.976928 | orchestrator |
2026-01-10 14:52:16.976932 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-01-10 14:52:16.976936 | orchestrator | Saturday 10 January 2026 14:50:19 +0000 (0:00:07.507) 0:01:23.605 ******
2026-01-10 14:52:16.976940 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-01-10 14:52:16.976944 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-01-10 14:52:16.976949 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-01-10 14:52:16.976953 | orchestrator |
2026-01-10 14:52:16.976959 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-01-10 14:52:16.976963 | orchestrator | Saturday 10 January 2026 14:50:23 +0000 (0:00:04.251) 0:01:27.857 ******
2026-01-10 14:52:16.976970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:52:16.976975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:52:16.976982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:52:16.976987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-10 14:52:16.976993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.977000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.977005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-10 14:52:16.977010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.977017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.977022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.977026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.977037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-10 14:52:16.977042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.977046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.977053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.977057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.977062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.977068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-10 14:52:16.977072 | orchestrator |
2026-01-10 14:52:16.977076 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-01-10 14:52:16.977081 | orchestrator | Saturday 10 January 2026 14:50:26 +0000 (0:00:03.092) 0:01:30.950 ******
2026-01-10 14:52:16.977088 | orchestrator | skipping: [testbed-node-2]
=> (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:52:16.977093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:52:16.977099 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:52:16.977103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977226 | orchestrator | 2026-01-10 14:52:16.977233 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-10 14:52:16.977240 | orchestrator | Saturday 10 January 2026 14:50:29 +0000 (0:00:03.191) 0:01:34.141 ****** 2026-01-10 14:52:16.977247 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:52:16.977255 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:52:16.977264 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:52:16.977270 | orchestrator | 2026-01-10 14:52:16.977276 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-01-10 14:52:16.977283 | orchestrator | Saturday 10 January 2026 14:50:30 +0000 (0:00:00.744) 0:01:34.885 ****** 2026-01-10 14:52:16.977290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:52:16.977301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:52:16.977308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977315 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977344 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:52:16.977348 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:52:16.977352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:52:16.977356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977381 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:52:16.977385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:52:16.977389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:52:16.977392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}})  2026-01-10 14:52:16.977409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977423 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:52:16.977427 | orchestrator | 2026-01-10 14:52:16.977431 | orchestrator | TASK [service-check-containers : designate | Check containers] ***************** 2026-01-10 14:52:16.977434 | orchestrator | Saturday 10 January 2026 14:50:33 +0000 (0:00:02.468) 0:01:37.354 ****** 2026-01-10 14:52:16.977438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option 
httpchk']}}}}) 2026-01-10 14:52:16.977442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:52:16.977446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option 
httpchk']}}}}) 2026-01-10 14:52:16.977454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977499 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977511 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:52:16.977530 | orchestrator | 2026-01-10 14:52:16.977536 | orchestrator | TASK [service-check-containers : designate | Notify handlers to 
restart containers] *** 2026-01-10 14:52:16.977541 | orchestrator | Saturday 10 January 2026 14:50:39 +0000 (0:00:06.021) 0:01:43.375 ****** 2026-01-10 14:52:16.977545 | orchestrator | changed: [testbed-node-0] => { 2026-01-10 14:52:16.977549 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:52:16.977553 | orchestrator | } 2026-01-10 14:52:16.977557 | orchestrator | changed: [testbed-node-1] => { 2026-01-10 14:52:16.977560 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:52:16.977564 | orchestrator | } 2026-01-10 14:52:16.977568 | orchestrator | changed: [testbed-node-2] => { 2026-01-10 14:52:16.977572 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:52:16.977575 | orchestrator | } 2026-01-10 14:52:16.977579 | orchestrator | 2026-01-10 14:52:16.977583 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-10 14:52:16.977587 | orchestrator | Saturday 10 January 2026 14:50:39 +0000 (0:00:00.875) 0:01:44.250 ****** 2026-01-10 14:52:16.977590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option 
httpchk']}}}})  2026-01-10 14:52:16.977594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:52:16.977598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2026-01-10 14:52:16.977613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977621 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:52:16.977625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 
'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:52:16.977628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:52:16.977635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977654 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:52:16.977658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:52:16.977662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-10 14:52:16.977668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-10 14:52:16.977683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2026-01-10 14:52:16.977687 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:52:16.977691 | orchestrator | 2026-01-10 14:52:16.977695 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-10 14:52:16.977699 | orchestrator | Saturday 10 January 2026 14:50:42 +0000 (0:00:02.372) 0:01:46.623 ****** 2026-01-10 14:52:16.977702 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:52:16.977706 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:52:16.977710 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:52:16.977713 | orchestrator | 2026-01-10 14:52:16.977717 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-01-10 14:52:16.977721 | orchestrator | Saturday 10 January 2026 14:50:42 +0000 (0:00:00.503) 0:01:47.126 ****** 2026-01-10 14:52:16.977725 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-01-10 14:52:16.977729 | orchestrator | 2026-01-10 14:52:16.977732 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-01-10 14:52:16.977736 | orchestrator | Saturday 10 January 2026 14:50:45 +0000 (0:00:02.759) 0:01:49.886 ****** 2026-01-10 14:52:16.977740 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-10 14:52:16.977743 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-01-10 14:52:16.977765 | orchestrator | 2026-01-10 14:52:16.977772 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-01-10 14:52:16.977776 | orchestrator | Saturday 10 January 2026 14:50:47 +0000 (0:00:02.365) 0:01:52.251 ****** 2026-01-10 14:52:16.977779 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:52:16.977783 | orchestrator | 2026-01-10 14:52:16.977787 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-10 14:52:16.977790 | 
orchestrator | Saturday 10 January 2026 14:51:04 +0000 (0:00:16.692) 0:02:08.943 ****** 2026-01-10 14:52:16.977794 | orchestrator | 2026-01-10 14:52:16.977798 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-10 14:52:16.977801 | orchestrator | Saturday 10 January 2026 14:51:04 +0000 (0:00:00.065) 0:02:09.009 ****** 2026-01-10 14:52:16.977805 | orchestrator | 2026-01-10 14:52:16.977809 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-10 14:52:16.977812 | orchestrator | Saturday 10 January 2026 14:51:04 +0000 (0:00:00.066) 0:02:09.075 ****** 2026-01-10 14:52:16.977816 | orchestrator | 2026-01-10 14:52:16.977820 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-01-10 14:52:16.977823 | orchestrator | Saturday 10 January 2026 14:51:04 +0000 (0:00:00.073) 0:02:09.148 ****** 2026-01-10 14:52:16.977827 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:52:16.977831 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:52:16.977835 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:52:16.977838 | orchestrator | 2026-01-10 14:52:16.977842 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-01-10 14:52:16.977846 | orchestrator | Saturday 10 January 2026 14:51:18 +0000 (0:00:13.516) 0:02:22.665 ****** 2026-01-10 14:52:16.977849 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:52:16.977853 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:52:16.977857 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:52:16.977860 | orchestrator | 2026-01-10 14:52:16.977864 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-01-10 14:52:16.977868 | orchestrator | Saturday 10 January 2026 14:51:28 +0000 (0:00:10.413) 0:02:33.079 ****** 2026-01-10 14:52:16.977871 | orchestrator | changed: 
[testbed-node-0] 2026-01-10 14:52:16.977875 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:52:16.977879 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:52:16.977882 | orchestrator | 2026-01-10 14:52:16.977886 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-01-10 14:52:16.977890 | orchestrator | Saturday 10 January 2026 14:51:34 +0000 (0:00:05.594) 0:02:38.674 ****** 2026-01-10 14:52:16.977893 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:52:16.977897 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:52:16.977901 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:52:16.977904 | orchestrator | 2026-01-10 14:52:16.977908 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-01-10 14:52:16.977912 | orchestrator | Saturday 10 January 2026 14:51:44 +0000 (0:00:09.996) 0:02:48.670 ****** 2026-01-10 14:52:16.977919 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:52:16.977923 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:52:16.977927 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:52:16.977930 | orchestrator | 2026-01-10 14:52:16.977934 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-01-10 14:52:16.977940 | orchestrator | Saturday 10 January 2026 14:51:56 +0000 (0:00:12.000) 0:03:00.671 ****** 2026-01-10 14:52:16.977944 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:52:16.977947 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:52:16.977951 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:52:16.977955 | orchestrator | 2026-01-10 14:52:16.977958 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-01-10 14:52:16.977962 | orchestrator | Saturday 10 January 2026 14:52:07 +0000 (0:00:10.939) 0:03:11.611 ****** 2026-01-10 14:52:16.977966 | orchestrator | changed: 
[testbed-node-0] 2026-01-10 14:52:16.977972 | orchestrator | 2026-01-10 14:52:16.977976 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:52:16.977980 | orchestrator | testbed-node-0 : ok=30  changed=24  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-10 14:52:16.977985 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 14:52:16.977989 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 14:52:16.977992 | orchestrator | 2026-01-10 14:52:16.977996 | orchestrator | 2026-01-10 14:52:16.978000 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:52:16.978004 | orchestrator | Saturday 10 January 2026 14:52:14 +0000 (0:00:07.407) 0:03:19.019 ****** 2026-01-10 14:52:16.978007 | orchestrator | =============================================================================== 2026-01-10 14:52:16.978011 | orchestrator | designate : Copying over designate.conf -------------------------------- 23.48s 2026-01-10 14:52:16.978099 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.69s 2026-01-10 14:52:16.978104 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.52s 2026-01-10 14:52:16.978108 | orchestrator | designate : Restart designate-mdns container --------------------------- 12.00s 2026-01-10 14:52:16.978112 | orchestrator | designate : Restart designate-worker container ------------------------- 10.94s 2026-01-10 14:52:16.978115 | orchestrator | designate : Restart designate-api container ---------------------------- 10.41s 2026-01-10 14:52:16.978119 | orchestrator | designate : Restart designate-producer container ----------------------- 10.00s 2026-01-10 14:52:16.978123 | orchestrator | designate : Copying over pools.yaml 
------------------------------------- 7.51s 2026-01-10 14:52:16.978127 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.41s 2026-01-10 14:52:16.978131 | orchestrator | designate : Copying over config.json files for services ----------------- 7.10s 2026-01-10 14:52:16.978134 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.71s 2026-01-10 14:52:16.978138 | orchestrator | service-ks-register : designate | Creating/deleting endpoints ----------- 6.57s 2026-01-10 14:52:16.978142 | orchestrator | service-check-containers : designate | Check containers ----------------- 6.02s 2026-01-10 14:52:16.978146 | orchestrator | designate : Restart designate-central container ------------------------- 5.59s 2026-01-10 14:52:16.978149 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.25s 2026-01-10 14:52:16.978153 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.07s 2026-01-10 14:52:16.978157 | orchestrator | service-ks-register : designate | Granting/revoking user roles ---------- 4.06s 2026-01-10 14:52:16.978161 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.97s 2026-01-10 14:52:16.978164 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.96s 2026-01-10 14:52:16.978168 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS certificate --- 3.34s 2026-01-10 14:52:16.978172 | orchestrator | 2026-01-10 14:52:16 | INFO  | Task d0897284-56ea-426a-b367-9987ca494dc0 is in state STARTED 2026-01-10 14:52:16.978178 | orchestrator | 2026-01-10 14:52:16 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:52:16.979417 | orchestrator | 2026-01-10 14:52:16 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:52:16.980931 | orchestrator | 
2026-01-10 14:52:16 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:52:16.980965 | orchestrator | 2026-01-10 14:52:16 | INFO  | Wait 1 second(s) until the next check
193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:52:44.402680 | orchestrator | 2026-01-10 14:52:44 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:52:47.442919 | orchestrator | 2026-01-10 14:52:47 | INFO  | Task d0897284-56ea-426a-b367-9987ca494dc0 is in state STARTED 2026-01-10 14:52:47.444257 | orchestrator | 2026-01-10 14:52:47 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:52:47.446792 | orchestrator | 2026-01-10 14:52:47 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:52:47.448272 | orchestrator | 2026-01-10 14:52:47 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:52:47.448322 | orchestrator | 2026-01-10 14:52:47 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:52:50.483996 | orchestrator | 2026-01-10 14:52:50 | INFO  | Task d0897284-56ea-426a-b367-9987ca494dc0 is in state STARTED 2026-01-10 14:52:50.485914 | orchestrator | 2026-01-10 14:52:50 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:52:50.488271 | orchestrator | 2026-01-10 14:52:50 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:52:50.489640 | orchestrator | 2026-01-10 14:52:50 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:52:50.489716 | orchestrator | 2026-01-10 14:52:50 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:52:53.546129 | orchestrator | 2026-01-10 14:52:53 | INFO  | Task d0897284-56ea-426a-b367-9987ca494dc0 is in state STARTED 2026-01-10 14:52:53.549263 | orchestrator | 2026-01-10 14:52:53 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:52:53.552162 | orchestrator | 2026-01-10 14:52:53 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:52:53.555203 | orchestrator | 2026-01-10 14:52:53 | INFO  | Task 
193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:52:53.555260 | orchestrator | 2026-01-10 14:52:53 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:52:56.609143 | orchestrator | 2026-01-10 14:52:56 | INFO  | Task d0897284-56ea-426a-b367-9987ca494dc0 is in state STARTED 2026-01-10 14:52:56.610498 | orchestrator | 2026-01-10 14:52:56 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:52:56.613565 | orchestrator | 2026-01-10 14:52:56 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:52:56.614887 | orchestrator | 2026-01-10 14:52:56 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:52:56.615251 | orchestrator | 2026-01-10 14:52:56 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:52:59.657433 | orchestrator | 2026-01-10 14:52:59 | INFO  | Task d0897284-56ea-426a-b367-9987ca494dc0 is in state STARTED 2026-01-10 14:52:59.657865 | orchestrator | 2026-01-10 14:52:59 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:52:59.659022 | orchestrator | 2026-01-10 14:52:59 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:52:59.660485 | orchestrator | 2026-01-10 14:52:59 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:52:59.660518 | orchestrator | 2026-01-10 14:52:59 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:02.709416 | orchestrator | 2026-01-10 14:53:02 | INFO  | Task d0897284-56ea-426a-b367-9987ca494dc0 is in state STARTED 2026-01-10 14:53:02.712459 | orchestrator | 2026-01-10 14:53:02 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:53:02.715953 | orchestrator | 2026-01-10 14:53:02 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:53:02.717330 | orchestrator | 2026-01-10 14:53:02 | INFO  | Task 
193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:53:02.717392 | orchestrator | 2026-01-10 14:53:02 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:05.754223 | orchestrator | 2026-01-10 14:53:05 | INFO  | Task d0897284-56ea-426a-b367-9987ca494dc0 is in state STARTED 2026-01-10 14:53:05.760039 | orchestrator | 2026-01-10 14:53:05 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:53:05.762698 | orchestrator | 2026-01-10 14:53:05 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:53:05.764730 | orchestrator | 2026-01-10 14:53:05 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:53:05.765605 | orchestrator | 2026-01-10 14:53:05 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:08.797817 | orchestrator | 2026-01-10 14:53:08 | INFO  | Task d0897284-56ea-426a-b367-9987ca494dc0 is in state STARTED 2026-01-10 14:53:08.799239 | orchestrator | 2026-01-10 14:53:08 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:53:08.800711 | orchestrator | 2026-01-10 14:53:08 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:53:08.802168 | orchestrator | 2026-01-10 14:53:08 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:53:08.802248 | orchestrator | 2026-01-10 14:53:08 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:11.837889 | orchestrator | 2026-01-10 14:53:11.837943 | orchestrator | 2026-01-10 14:53:11.837952 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:53:11.837973 | orchestrator | 2026-01-10 14:53:11.837979 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:53:11.837985 | orchestrator | Saturday 10 January 2026 14:51:54 +0000 (0:00:00.274) 0:00:00.274 ****** 2026-01-10 
14:53:11.837990 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:53:11.837997 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:53:11.838003 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:53:11.838009 | orchestrator | 2026-01-10 14:53:11.838044 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:53:11.838050 | orchestrator | Saturday 10 January 2026 14:51:54 +0000 (0:00:00.360) 0:00:00.635 ****** 2026-01-10 14:53:11.838056 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-01-10 14:53:11.838063 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-01-10 14:53:11.838069 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-01-10 14:53:11.838075 | orchestrator | 2026-01-10 14:53:11.838081 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-01-10 14:53:11.838086 | orchestrator | 2026-01-10 14:53:11.838091 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-10 14:53:11.838096 | orchestrator | Saturday 10 January 2026 14:51:55 +0000 (0:00:00.760) 0:00:01.396 ****** 2026-01-10 14:53:11.838102 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:53:11.838107 | orchestrator | 2026-01-10 14:53:11.838113 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************ 2026-01-10 14:53:11.838118 | orchestrator | Saturday 10 January 2026 14:51:56 +0000 (0:00:00.793) 0:00:02.189 ****** 2026-01-10 14:53:11.838124 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-01-10 14:53:11.838129 | orchestrator | 2026-01-10 14:53:11.838134 | orchestrator | TASK [service-ks-register : placement | Creating/deleting endpoints] *********** 2026-01-10 14:53:11.838140 | orchestrator | Saturday 10 January 2026 14:52:00 
+0000 (0:00:03.654) 0:00:05.844 ****** 2026-01-10 14:53:11.838146 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-01-10 14:53:11.838151 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-01-10 14:53:11.838155 | orchestrator | 2026-01-10 14:53:11.838158 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-01-10 14:53:11.838161 | orchestrator | Saturday 10 January 2026 14:52:07 +0000 (0:00:07.770) 0:00:13.614 ****** 2026-01-10 14:53:11.838165 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-10 14:53:11.838168 | orchestrator | 2026-01-10 14:53:11.838171 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-01-10 14:53:11.838174 | orchestrator | Saturday 10 January 2026 14:52:11 +0000 (0:00:03.352) 0:00:16.967 ****** 2026-01-10 14:53:11.838177 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-10 14:53:11.838180 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-01-10 14:53:11.838183 | orchestrator | 2026-01-10 14:53:11.838186 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-01-10 14:53:11.838189 | orchestrator | Saturday 10 January 2026 14:52:15 +0000 (0:00:04.178) 0:00:21.146 ****** 2026-01-10 14:53:11.838192 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-10 14:53:11.838195 | orchestrator | 2026-01-10 14:53:11.838198 | orchestrator | TASK [service-ks-register : placement | Granting/revoking user roles] ********** 2026-01-10 14:53:11.838201 | orchestrator | Saturday 10 January 2026 14:52:18 +0000 (0:00:03.210) 0:00:24.356 ****** 2026-01-10 14:53:11.838204 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-01-10 14:53:11.838208 | orchestrator | 2026-01-10 
14:53:11.838211 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-10 14:53:11.838214 | orchestrator | Saturday 10 January 2026 14:52:22 +0000 (0:00:03.643) 0:00:28.000 ****** 2026-01-10 14:53:11.838221 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:11.838225 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:11.838228 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:11.838231 | orchestrator | 2026-01-10 14:53:11.838234 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-01-10 14:53:11.838243 | orchestrator | Saturday 10 January 2026 14:52:22 +0000 (0:00:00.302) 0:00:28.302 ****** 2026-01-10 14:53:11.838261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:53:11.838266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:53:11.838270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:53:11.838274 | orchestrator | 
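Each "changed: [...] => (item={'key': 'placement-api', 'value': {...}})" record above shows the role iterating over a service-definition dict whose healthcheck fields arrive as strings ('interval': '30', and so on). A minimal sketch of normalizing such an entry into numeric container-healthcheck settings — the helper name is hypothetical, not kolla-ansible code; the sample dict is abridged from the log:

```python
# Hypothetical helper: normalize a kolla-style service healthcheck whose
# numeric fields are logged as strings into integer settings.
def build_healthcheck(service):
    hc = service["healthcheck"]
    return {
        "test": hc["test"],               # e.g. ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780']
        "interval": int(hc["interval"]),  # values are strings in the dumps above
        "retries": int(hc["retries"]),
        "start_period": int(hc["start_period"]),
        "timeout": int(hc["timeout"]),
    }

# Abridged service definition taken from the log output above.
placement_api = {
    "container_name": "placement_api",
    "image": "registry.osism.tech/kolla/placement-api:2025.1",
    "healthcheck": {
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"],
        "timeout": "30",
    },
}

print(build_healthcheck(placement_api))
```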
2026-01-10 14:53:11.838277 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-01-10 14:53:11.838280 | orchestrator | Saturday 10 January 2026 14:52:23 +0000 (0:00:01.209) 0:00:29.512 ****** 2026-01-10 14:53:11.838283 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:11.838286 | orchestrator | 2026-01-10 14:53:11.838289 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-01-10 14:53:11.838295 | orchestrator | Saturday 10 January 2026 14:52:23 +0000 (0:00:00.248) 0:00:29.760 ****** 2026-01-10 14:53:11.838298 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:11.838301 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:11.838304 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:11.838307 | orchestrator | 2026-01-10 14:53:11.838310 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-10 14:53:11.838313 | orchestrator | Saturday 10 January 2026 14:52:25 +0000 (0:00:01.136) 0:00:30.896 ****** 2026-01-10 14:53:11.838316 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:53:11.838319 | orchestrator | 2026-01-10 14:53:11.838322 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-01-10 14:53:11.838325 | orchestrator | Saturday 10 January 2026 14:52:25 +0000 (0:00:00.733) 0:00:31.630 ****** 2026-01-10 14:53:11.838331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:53:11.838337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:53:11.838341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:53:11.838347 | orchestrator | 2026-01-10 14:53:11.838350 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-01-10 14:53:11.838353 | orchestrator | Saturday 10 January 2026 14:52:27 +0000 (0:00:02.085) 0:00:33.716 ****** 2026-01-10 14:53:11.838358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-10 14:53:11.838361 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:11.838368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-10 14:53:11.838371 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:11.838375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': 
'30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-10 14:53:11.838378 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:11.838381 | orchestrator | 2026-01-10 14:53:11.838384 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-01-10 14:53:11.838387 | orchestrator | Saturday 10 January 2026 14:52:29 +0000 (0:00:01.443) 0:00:35.159 ****** 2026-01-10 14:53:11.838390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-10 14:53:11.838396 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:11.838403 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-10 14:53:11.838407 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:11.838413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-10 14:53:11.838416 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:11.838419 | orchestrator | 2026-01-10 14:53:11.838422 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-01-10 14:53:11.838426 | orchestrator | Saturday 10 January 2026 14:52:30 +0000 (0:00:01.440) 0:00:36.600 ****** 2026-01-10 14:53:11.838431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:53:11.838440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:53:11.838445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:53:11.838449 | orchestrator | 2026-01-10 14:53:11.838452 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-01-10 14:53:11.838455 | orchestrator | 
Saturday 10 January 2026 14:52:32 +0000 (0:00:01.548) 0:00:38.148 ****** 2026-01-10 14:53:11.838461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:53:11.838464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:53:11.838472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:53:11.838475 | orchestrator | 2026-01-10 14:53:11.838478 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-01-10 14:53:11.838482 | orchestrator | Saturday 10 January 2026 14:52:37 +0000 (0:00:04.739) 0:00:42.888 ****** 2026-01-10 14:53:11.838485 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-01-10 14:53:11.838489 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:11.838493 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-01-10 14:53:11.838496 | orchestrator | 
skipping: [testbed-node-1] 2026-01-10 14:53:11.838500 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-01-10 14:53:11.838504 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:11.838509 | orchestrator | 2026-01-10 14:53:11.838514 | orchestrator | TASK [Configure uWSGI for Placement] ******************************************* 2026-01-10 14:53:11.838519 | orchestrator | Saturday 10 January 2026 14:52:37 +0000 (0:00:00.731) 0:00:43.619 ****** 2026-01-10 14:53:11.838524 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:53:11.838529 | orchestrator | 2026-01-10 14:53:11.838535 | orchestrator | TASK [service-uwsgi-config : Copying over placement-api uWSGI config] ********** 2026-01-10 14:53:11.838542 | orchestrator | Saturday 10 January 2026 14:52:38 +0000 (0:00:00.726) 0:00:44.346 ****** 2026-01-10 14:53:11.838548 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:11.838571 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:53:11.838576 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:53:11.838580 | orchestrator | 2026-01-10 14:53:11.838583 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-01-10 14:53:11.838587 | orchestrator | Saturday 10 January 2026 14:52:40 +0000 (0:00:02.145) 0:00:46.492 ****** 2026-01-10 14:53:11.838591 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:11.838594 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:53:11.838601 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:53:11.838604 | orchestrator | 2026-01-10 14:53:11.838608 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-01-10 14:53:11.838611 | orchestrator | Saturday 10 January 2026 14:52:42 +0000 (0:00:01.370) 0:00:47.862 ****** 2026-01-10 14:53:11.838615 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-10 14:53:11.838619 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:11.838623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-10 14:53:11.838627 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:11.838633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-10 14:53:11.838637 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:11.838640 | orchestrator | 2026-01-10 14:53:11.838644 | orchestrator | TASK [service-check-containers : placement | Check containers] ***************** 2026-01-10 14:53:11.838648 | orchestrator | Saturday 10 January 2026 14:52:42 +0000 (0:00:00.548) 0:00:48.411 ****** 2026-01-10 14:53:11.838654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:53:11.838660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:53:11.838699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-10 14:53:11.838703 | orchestrator | 2026-01-10 14:53:11.838709 | orchestrator | TASK [service-check-containers : placement | Notify handlers to restart containers] *** 2026-01-10 14:53:11.838713 | orchestrator | Saturday 10 January 2026 14:52:43 +0000 (0:00:01.118) 0:00:49.529 ****** 2026-01-10 14:53:11.838716 | orchestrator | changed: [testbed-node-0] => { 2026-01-10 14:53:11.838719 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:53:11.838722 | orchestrator | } 2026-01-10 14:53:11.838725 | orchestrator | changed: [testbed-node-1] => { 2026-01-10 14:53:11.838728 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:53:11.838731 | orchestrator | } 2026-01-10 14:53:11.838734 | orchestrator | changed: [testbed-node-2] => { 2026-01-10 14:53:11.838737 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:53:11.838740 | orchestrator | } 2026-01-10 14:53:11.838743 | orchestrator | 2026-01-10 14:53:11.838746 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-10 14:53:11.838750 | orchestrator | Saturday 10 January 2026 
14:52:44 +0000 (0:00:00.445) 0:00:49.975 ****** 2026-01-10 14:53:11.838759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-10 14:53:11 | INFO  | Task d0897284-56ea-426a-b367-9987ca494dc0 is in state SUCCESS 2026-01-10 14:53:11.838767 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:11.838771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-10 14:53:11.838774 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:11.838777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-10 14:53:11.838780 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:11.838784 | orchestrator | 2026-01-10 14:53:11.838787 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-01-10 14:53:11.838790 | orchestrator | Saturday 10 January 2026 14:52:44 +0000 (0:00:00.644) 0:00:50.619 ****** 2026-01-10 14:53:11.838795 | orchestrator | changed: [testbed-node-0] 2026-01-10 
14:53:11.838798 | orchestrator | 2026-01-10 14:53:11.838801 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-01-10 14:53:11.838807 | orchestrator | Saturday 10 January 2026 14:52:46 +0000 (0:00:01.864) 0:00:52.484 ****** 2026-01-10 14:53:11.838810 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:11.838813 | orchestrator | 2026-01-10 14:53:11.838816 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-01-10 14:53:11.838819 | orchestrator | Saturday 10 January 2026 14:52:48 +0000 (0:00:02.238) 0:00:54.723 ****** 2026-01-10 14:53:11.838822 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:11.838825 | orchestrator | 2026-01-10 14:53:11.838828 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-10 14:53:11.838831 | orchestrator | Saturday 10 January 2026 14:53:02 +0000 (0:00:13.583) 0:01:08.306 ****** 2026-01-10 14:53:11.838834 | orchestrator | 2026-01-10 14:53:11.838837 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-10 14:53:11.838840 | orchestrator | Saturday 10 January 2026 14:53:02 +0000 (0:00:00.082) 0:01:08.389 ****** 2026-01-10 14:53:11.838843 | orchestrator | 2026-01-10 14:53:11.838846 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-10 14:53:11.838849 | orchestrator | Saturday 10 January 2026 14:53:02 +0000 (0:00:00.288) 0:01:08.678 ****** 2026-01-10 14:53:11.838852 | orchestrator | 2026-01-10 14:53:11.838855 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-01-10 14:53:11.838861 | orchestrator | Saturday 10 January 2026 14:53:02 +0000 (0:00:00.067) 0:01:08.746 ****** 2026-01-10 14:53:11.838864 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:53:11.838867 | orchestrator | changed: [testbed-node-2] 2026-01-10 
14:53:11.838870 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:11.838873 | orchestrator | 2026-01-10 14:53:11.838876 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:53:11.838880 | orchestrator | testbed-node-0 : ok=23  changed=16  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-10 14:53:11.838883 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 14:53:11.838887 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 14:53:11.838890 | orchestrator | 2026-01-10 14:53:11.838893 | orchestrator | 2026-01-10 14:53:11.838896 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:53:11.838899 | orchestrator | Saturday 10 January 2026 14:53:10 +0000 (0:00:07.992) 0:01:16.739 ****** 2026-01-10 14:53:11.838902 | orchestrator | =============================================================================== 2026-01-10 14:53:11.838905 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.58s 2026-01-10 14:53:11.838908 | orchestrator | placement : Restart placement-api container ----------------------------- 7.99s 2026-01-10 14:53:11.838911 | orchestrator | service-ks-register : placement | Creating/deleting endpoints ----------- 7.77s 2026-01-10 14:53:11.838914 | orchestrator | placement : Copying over placement.conf --------------------------------- 4.74s 2026-01-10 14:53:11.838917 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.18s 2026-01-10 14:53:11.838920 | orchestrator | service-ks-register : placement | Creating/deleting services ------------ 3.65s 2026-01-10 14:53:11.838923 | orchestrator | service-ks-register : placement | Granting/revoking user roles ---------- 3.64s 2026-01-10 14:53:11.838926 | orchestrator | 
service-ks-register : placement | Creating projects --------------------- 3.35s 2026-01-10 14:53:11.838929 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.21s 2026-01-10 14:53:11.838932 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.24s 2026-01-10 14:53:11.838935 | orchestrator | service-uwsgi-config : Copying over placement-api uWSGI config ---------- 2.15s 2026-01-10 14:53:11.838938 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.09s 2026-01-10 14:53:11.838944 | orchestrator | placement : Creating placement databases -------------------------------- 1.86s 2026-01-10 14:53:11.838947 | orchestrator | placement : Copying over config.json files for services ----------------- 1.55s 2026-01-10 14:53:11.838950 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.44s 2026-01-10 14:53:11.838953 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.44s 2026-01-10 14:53:11.838956 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.37s 2026-01-10 14:53:11.838959 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.21s 2026-01-10 14:53:11.838962 | orchestrator | placement : Set placement policy file ----------------------------------- 1.14s 2026-01-10 14:53:11.838965 | orchestrator | service-check-containers : placement | Check containers ----------------- 1.12s 2026-01-10 14:53:11.838968 | orchestrator | 2026-01-10 14:53:11 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:53:11.838971 | orchestrator | 2026-01-10 14:53:11 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:53:11.839031 | orchestrator | 2026-01-10 14:53:11 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 
14:53:11.839036 | orchestrator | 2026-01-10 14:53:11 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:14.869843 | orchestrator | 2026-01-10 14:53:14 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state STARTED 2026-01-10 14:53:14.870486 | orchestrator | 2026-01-10 14:53:14 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:53:14.871424 | orchestrator | 2026-01-10 14:53:14 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:53:14.872421 | orchestrator | 2026-01-10 14:53:14 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:53:14.872456 | orchestrator | 2026-01-10 14:53:14 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:17.900210 | orchestrator | 2026-01-10 14:53:17.900266 | orchestrator | 2026-01-10 14:53:17 | INFO  | Task c675ce34-5677-4951-8384-6a5ecff98e0c is in state SUCCESS 2026-01-10 14:53:17.901236 | orchestrator | 2026-01-10 14:53:17.901288 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:53:17.901293 | orchestrator | 2026-01-10 14:53:17.901298 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:53:17.901303 | orchestrator | Saturday 10 January 2026 14:48:34 +0000 (0:00:00.267) 0:00:00.267 ****** 2026-01-10 14:53:17.901308 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:53:17.901313 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:53:17.901317 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:53:17.901322 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:53:17.901326 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:53:17.901331 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:53:17.901335 | orchestrator | 2026-01-10 14:53:17.901340 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:53:17.901344 | orchestrator | 
Saturday 10 January 2026 14:48:35 +0000 (0:00:00.777) 0:00:01.045 ****** 2026-01-10 14:53:17.901348 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-01-10 14:53:17.901353 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-01-10 14:53:17.901357 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-01-10 14:53:17.901361 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-01-10 14:53:17.901365 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-01-10 14:53:17.901370 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-01-10 14:53:17.901374 | orchestrator | 2026-01-10 14:53:17.901379 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-01-10 14:53:17.901383 | orchestrator | 2026-01-10 14:53:17.901387 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-10 14:53:17.901410 | orchestrator | Saturday 10 January 2026 14:48:35 +0000 (0:00:00.625) 0:00:01.670 ****** 2026-01-10 14:53:17.901419 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:53:17.901425 | orchestrator | 2026-01-10 14:53:17.901429 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-01-10 14:53:17.901432 | orchestrator | Saturday 10 January 2026 14:48:37 +0000 (0:00:01.319) 0:00:02.989 ****** 2026-01-10 14:53:17.901436 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:53:17.901440 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:53:17.901443 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:53:17.901447 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:53:17.901451 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:53:17.901454 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:53:17.901458 | 
orchestrator | 2026-01-10 14:53:17.901462 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-01-10 14:53:17.901466 | orchestrator | Saturday 10 January 2026 14:48:38 +0000 (0:00:01.329) 0:00:04.319 ****** 2026-01-10 14:53:17.901469 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:53:17.901473 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:53:17.901477 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:53:17.901480 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:53:17.901484 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:53:17.901488 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:53:17.901491 | orchestrator | 2026-01-10 14:53:17.901495 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-01-10 14:53:17.901499 | orchestrator | Saturday 10 January 2026 14:48:39 +0000 (0:00:01.044) 0:00:05.363 ****** 2026-01-10 14:53:17.901502 | orchestrator | ok: [testbed-node-0] => { 2026-01-10 14:53:17.901506 | orchestrator |  "changed": false, 2026-01-10 14:53:17.901511 | orchestrator |  "msg": "All assertions passed" 2026-01-10 14:53:17.901519 | orchestrator | } 2026-01-10 14:53:17.901528 | orchestrator | ok: [testbed-node-1] => { 2026-01-10 14:53:17.901533 | orchestrator |  "changed": false, 2026-01-10 14:53:17.901540 | orchestrator |  "msg": "All assertions passed" 2026-01-10 14:53:17.901546 | orchestrator | } 2026-01-10 14:53:17.901552 | orchestrator | ok: [testbed-node-2] => { 2026-01-10 14:53:17.901557 | orchestrator |  "changed": false, 2026-01-10 14:53:17.901586 | orchestrator |  "msg": "All assertions passed" 2026-01-10 14:53:17.901610 | orchestrator | } 2026-01-10 14:53:17.901616 | orchestrator | ok: [testbed-node-3] => { 2026-01-10 14:53:17.901623 | orchestrator |  "changed": false, 2026-01-10 14:53:17.901629 | orchestrator |  "msg": "All assertions passed" 2026-01-10 14:53:17.901636 | orchestrator | } 2026-01-10 14:53:17.901643 | orchestrator | 
ok: [testbed-node-4] => { 2026-01-10 14:53:17.901650 | orchestrator |  "changed": false, 2026-01-10 14:53:17.901670 | orchestrator |  "msg": "All assertions passed" 2026-01-10 14:53:17.901674 | orchestrator | } 2026-01-10 14:53:17.901678 | orchestrator | ok: [testbed-node-5] => { 2026-01-10 14:53:17.901682 | orchestrator |  "changed": false, 2026-01-10 14:53:17.901685 | orchestrator |  "msg": "All assertions passed" 2026-01-10 14:53:17.901689 | orchestrator | } 2026-01-10 14:53:17.901693 | orchestrator | 2026-01-10 14:53:17.901697 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-01-10 14:53:17.901700 | orchestrator | Saturday 10 January 2026 14:48:40 +0000 (0:00:00.924) 0:00:06.288 ****** 2026-01-10 14:53:17.901704 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.901710 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.901717 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:17.901723 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:53:17.901729 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:53:17.901735 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:53:17.901741 | orchestrator | 2026-01-10 14:53:17.901756 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] ************** 2026-01-10 14:53:17.901768 | orchestrator | Saturday 10 January 2026 14:48:40 +0000 (0:00:00.644) 0:00:06.933 ****** 2026-01-10 14:53:17.901775 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-01-10 14:53:17.901781 | orchestrator | 2026-01-10 14:53:17.901906 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting endpoints] ************* 2026-01-10 14:53:17.901919 | orchestrator | Saturday 10 January 2026 14:48:44 +0000 (0:00:03.110) 0:00:10.043 ****** 2026-01-10 14:53:17.901928 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 
2026-01-10 14:53:17.901934 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-01-10 14:53:17.901940 | orchestrator | 2026-01-10 14:53:17.901958 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-01-10 14:53:17.901964 | orchestrator | Saturday 10 January 2026 14:48:49 +0000 (0:00:05.893) 0:00:15.936 ****** 2026-01-10 14:53:17.901970 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-10 14:53:17.901977 | orchestrator | 2026-01-10 14:53:17.901983 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-01-10 14:53:17.901989 | orchestrator | Saturday 10 January 2026 14:48:53 +0000 (0:00:03.305) 0:00:19.242 ****** 2026-01-10 14:53:17.901994 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-10 14:53:17.902000 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-01-10 14:53:17.902007 | orchestrator | 2026-01-10 14:53:17.902048 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-01-10 14:53:17.902056 | orchestrator | Saturday 10 January 2026 14:48:57 +0000 (0:00:03.961) 0:00:23.203 ****** 2026-01-10 14:53:17.902062 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-10 14:53:17.902069 | orchestrator | 2026-01-10 14:53:17.902075 | orchestrator | TASK [service-ks-register : neutron | Granting/revoking user roles] ************ 2026-01-10 14:53:17.902082 | orchestrator | Saturday 10 January 2026 14:49:00 +0000 (0:00:03.365) 0:00:26.569 ****** 2026-01-10 14:53:17.902088 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-01-10 14:53:17.902094 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-01-10 14:53:17.902101 | orchestrator | 2026-01-10 14:53:17.902107 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-01-10 14:53:17.902112 | orchestrator | Saturday 10 January 2026 14:49:08 +0000 (0:00:07.646) 0:00:34.215 ****** 2026-01-10 14:53:17.902118 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.902124 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.902131 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:17.902136 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:53:17.902142 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:53:17.902148 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:53:17.902155 | orchestrator | 2026-01-10 14:53:17.902161 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-01-10 14:53:17.902168 | orchestrator | Saturday 10 January 2026 14:49:09 +0000 (0:00:00.785) 0:00:35.001 ****** 2026-01-10 14:53:17.902174 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.902180 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:17.902187 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.902193 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:53:17.902201 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:53:17.902207 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:53:17.902214 | orchestrator | 2026-01-10 14:53:17.902220 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-01-10 14:53:17.902226 | orchestrator | Saturday 10 January 2026 14:49:11 +0000 (0:00:02.290) 0:00:37.291 ****** 2026-01-10 14:53:17.902232 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:53:17.902238 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:53:17.902245 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:53:17.902252 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:53:17.902265 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:53:17.902272 | orchestrator | ok: [testbed-node-4] 2026-01-10 
14:53:17.902279 | orchestrator | 2026-01-10 14:53:17.902285 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-01-10 14:53:17.902291 | orchestrator | Saturday 10 January 2026 14:49:13 +0000 (0:00:02.032) 0:00:39.324 ****** 2026-01-10 14:53:17.902297 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.902303 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.902310 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:53:17.902317 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:17.902323 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:53:17.902330 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:53:17.902336 | orchestrator | 2026-01-10 14:53:17.902343 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-01-10 14:53:17.902350 | orchestrator | Saturday 10 January 2026 14:49:15 +0000 (0:00:02.211) 0:00:41.535 ****** 2026-01-10 14:53:17.902365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option 
httpchk']}}}}) 2026-01-10 14:53:17.902385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:53:17.902392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:53:17.902405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:53:17.902412 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:53:17.902418 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:53:17.902423 | orchestrator | 2026-01-10 14:53:17.902426 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-01-10 14:53:17.902430 | orchestrator | Saturday 10 January 2026 14:49:18 +0000 (0:00:02.876) 0:00:44.412 ****** 2026-01-10 14:53:17.902434 | orchestrator | [WARNING]: Skipped 2026-01-10 14:53:17.902438 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-01-10 14:53:17.902450 | orchestrator | due to this access issue: 2026-01-10 14:53:17.902454 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-01-10 14:53:17.902458 | orchestrator | a directory 2026-01-10 14:53:17.902462 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:53:17.902477 | orchestrator | 2026-01-10 14:53:17.902482 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-10 14:53:17.902485 | orchestrator | Saturday 10 January 2026 14:49:19 +0000 (0:00:00.904) 0:00:45.317 ****** 2026-01-10 14:53:17.902489 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:53:17.902494 | orchestrator | 2026-01-10 14:53:17.902497 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-01-10 14:53:17.902501 | orchestrator | Saturday 10 January 2026 
14:49:20 +0000 (0:00:01.340) 0:00:46.657 ****** 2026-01-10 14:53:17.902512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:53:17.902524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:53:17.902533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:53:17.902548 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:53:17.902559 | orchestrator | changed: [testbed-node-4] 
=> (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:53:17.902569 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:53:17.902576 | orchestrator | 2026-01-10 14:53:17.902582 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-01-10 14:53:17.902588 | orchestrator | Saturday 10 January 2026 14:49:23 +0000 (0:00:03.270) 0:00:49.928 ****** 2026-01-10 14:53:17.902593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:53:17.902600 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.902608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:53:17.902615 | 
orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.902623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:53:17.902633 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:53:17.902640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:53:17.902648 | orchestrator | skipping: 
[testbed-node-2] 2026-01-10 14:53:17.902692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:53:17.902699 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:53:17.902705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:53:17.902711 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:53:17.902716 | orchestrator | 2026-01-10 14:53:17.902727 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-01-10 14:53:17.902733 | orchestrator | Saturday 10 January 2026 14:49:26 
+0000 (0:00:02.797) 0:00:52.726 ****** 2026-01-10 14:53:17.902744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:53:17.902755 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.902761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:53:17.902768 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.902774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:53:17.902781 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:17.902787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:53:17.902793 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:53:17.902806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:53:17.902824 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:53:17.902828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:53:17.902832 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:53:17.902836 | orchestrator | 2026-01-10 
14:53:17.902840 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-01-10 14:53:17.902843 | orchestrator | Saturday 10 January 2026 14:49:29 +0000 (0:00:03.195) 0:00:55.921 ****** 2026-01-10 14:53:17.902847 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:17.902851 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.902855 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.902858 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:53:17.902862 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:53:17.902866 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:53:17.902869 | orchestrator | 2026-01-10 14:53:17.902873 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-01-10 14:53:17.902877 | orchestrator | Saturday 10 January 2026 14:49:32 +0000 (0:00:02.845) 0:00:58.767 ****** 2026-01-10 14:53:17.902880 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.902884 | orchestrator | 2026-01-10 14:53:17.902888 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-01-10 14:53:17.902891 | orchestrator | Saturday 10 January 2026 14:49:32 +0000 (0:00:00.127) 0:00:58.894 ****** 2026-01-10 14:53:17.902895 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.902899 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.902903 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:17.902906 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:53:17.902910 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:53:17.902914 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:53:17.902917 | orchestrator | 2026-01-10 14:53:17.902921 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-01-10 14:53:17.902925 | orchestrator | Saturday 10 January 2026 14:49:33 +0000 (0:00:00.872) 
0:00:59.767 ****** 2026-01-10 14:53:17.902929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:53:17.902933 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.902939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:53:17.902948 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.902952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:53:17.902956 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:17.902960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:53:17.902963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:53:17.902967 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:53:17.902970 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:53:17.902974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:53:17.902981 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:53:17.902985 | orchestrator | 2026-01-10 14:53:17.902988 | 
orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-01-10 14:53:17.902992 | orchestrator | Saturday 10 January 2026 14:49:37 +0000 (0:00:03.450) 0:01:03.217 ****** 2026-01-10 14:53:17.902998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:53:17.903002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:53:17.903005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:53:17.903009 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:53:17.903017 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:53:17.903024 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:53:17.903027 | orchestrator | 2026-01-10 14:53:17.903032 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-01-10 14:53:17.903037 | orchestrator | Saturday 10 January 2026 14:49:42 +0000 (0:00:05.164) 0:01:08.381 ****** 2026-01-10 14:53:17.903047 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:53:17.903054 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:53:17.903067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:53:17.903077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:53:17.903082 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:53:17.903088 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-10 14:53:17.903093 | orchestrator | 2026-01-10 14:53:17.903098 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-01-10 14:53:17.903104 | orchestrator | Saturday 10 January 2026 14:49:50 +0000 (0:00:08.499) 0:01:16.881 ****** 2026-01-10 14:53:17.903109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:53:17.903119 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.903129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:53:17.903135 | orchestrator | skipping: [testbed-node-1] 2026-01-10 
14:53:17.903140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:53:17.903146 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:17.903151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:53:17.903156 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:53:17.903162 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:53:17.903172 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:53:17.903180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:53:17.903186 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:53:17.903191 | orchestrator | 2026-01-10 14:53:17.903196 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-01-10 14:53:17.903202 | orchestrator | Saturday 10 January 2026 14:49:54 +0000 (0:00:04.084) 0:01:20.966 ****** 2026-01-10 
14:53:17.903208 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:53:17.903212 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:53:17.903216 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:53:17.903219 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:53:17.903223 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:17.903226 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:53:17.903230 | orchestrator | 2026-01-10 14:53:17.903233 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-01-10 14:53:17.903239 | orchestrator | Saturday 10 January 2026 14:49:57 +0000 (0:00:02.807) 0:01:23.773 ****** 2026-01-10 14:53:17.903243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:53:17.903247 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:53:17.903250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:53:17.903256 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:53:17.903260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-10 14:53:17.903263 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:53:17.903267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:53:17.903277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:53:17.903281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:53:17.903285 | orchestrator | 2026-01-10 14:53:17.903288 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-01-10 14:53:17.903294 | orchestrator | Saturday 10 January 2026 14:50:03 +0000 (0:00:05.517) 0:01:29.291 ****** 2026-01-10 14:53:17.903298 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.903301 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:17.903305 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.903308 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:53:17.903312 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:53:17.903315 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:53:17.903319 | orchestrator | 2026-01-10 14:53:17.903322 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-01-10 14:53:17.903326 | orchestrator | Saturday 10 January 2026 14:50:06 +0000 (0:00:03.248) 0:01:32.539 ****** 2026-01-10 14:53:17.903329 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:17.903332 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.903336 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.903339 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:53:17.903342 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:53:17.903346 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:53:17.903350 | orchestrator | 2026-01-10 14:53:17.903355 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] 
*********************************** 2026-01-10 14:53:17.903361 | orchestrator | Saturday 10 January 2026 14:50:10 +0000 (0:00:04.163) 0:01:36.703 ****** 2026-01-10 14:53:17.903367 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.903373 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.903378 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:17.903384 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:53:17.903389 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:53:17.903394 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:53:17.903400 | orchestrator | 2026-01-10 14:53:17.903406 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-01-10 14:53:17.903410 | orchestrator | Saturday 10 January 2026 14:50:13 +0000 (0:00:03.181) 0:01:39.884 ****** 2026-01-10 14:53:17.903415 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.903421 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.903426 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:17.903431 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:53:17.903436 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:53:17.903442 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:53:17.903448 | orchestrator | 2026-01-10 14:53:17.903453 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-01-10 14:53:17.903458 | orchestrator | Saturday 10 January 2026 14:50:17 +0000 (0:00:04.046) 0:01:43.931 ****** 2026-01-10 14:53:17.903464 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.903469 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:53:17.903474 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:53:17.903479 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:53:17.903484 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:53:17.903489 | orchestrator | skipping: 
[testbed-node-2]
2026-01-10 14:53:17.903494 | orchestrator |
2026-01-10 14:53:17.903499 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-01-10 14:53:17.903506 | orchestrator | Saturday 10 January 2026 14:50:21 +0000 (0:00:03.772) 0:01:47.703 ******
2026-01-10 14:53:17.903515 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-10 14:53:17.903521 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:53:17.903527 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-10 14:53:17.903532 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:53:17.903537 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-10 14:53:17.903542 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:53:17.903547 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-10 14:53:17.903557 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:53:17.903561 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-10 14:53:17.903566 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:53:17.903576 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-01-10 14:53:17.903581 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:53:17.903586 | orchestrator |
2026-01-10 14:53:17.903591 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-01-10 14:53:17.903597 | orchestrator | Saturday 10 January 2026 14:50:24 +0000 (0:00:02.589) 0:01:50.293 ******
2026-01-10 14:53:17.903602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:53:17.903609 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:53:17.903615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:53:17.903621 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:53:17.903626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:53:17.903632 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:53:17.903641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:53:17.903649 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:53:17.903718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:53:17.903724 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:53:17.903730 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:53:17.903735 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:53:17.903740 | orchestrator |
2026-01-10 14:53:17.903747 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-01-10 14:53:17.903753 | orchestrator | Saturday 10 January 2026 14:50:26 +0000 (0:00:02.383) 0:01:52.677 ******
2026-01-10 14:53:17.903758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:53:17.903764 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:53:17.903773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:53:17.903783 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:53:17.903794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:53:17.903801 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:53:17.903807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:53:17.903813 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:53:17.903819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:53:17.903825 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:53:17.903831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:53:17.903844 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:53:17.903849 | orchestrator |
2026-01-10 14:53:17.903855 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-01-10 14:53:17.903861 | orchestrator | Saturday 10 January 2026 14:50:29 +0000 (0:00:03.202) 0:01:55.879 ******
2026-01-10 14:53:17.903869 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:53:17.903875 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:53:17.903881 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:53:17.903886 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:53:17.903892 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:53:17.903897 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:53:17.903902 | orchestrator |
2026-01-10 14:53:17.903906 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-01-10 14:53:17.903909 | orchestrator | Saturday 10 January 2026 14:50:33 +0000 (0:00:04.077) 0:01:59.957 ******
2026-01-10 14:53:17.903913 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:53:17.903916 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:53:17.903920 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:53:17.903926 | orchestrator | changed: [testbed-node-5]
2026-01-10 14:53:17.903931 | orchestrator | changed: [testbed-node-3]
2026-01-10 14:53:17.903937 | orchestrator | changed: [testbed-node-4]
2026-01-10 14:53:17.903945 | orchestrator |
2026-01-10 14:53:17.903955 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-01-10 14:53:17.903960 | orchestrator | Saturday 10 January 2026 14:50:37 +0000 (0:00:03.472) 0:02:03.430 ******
2026-01-10 14:53:17.903965 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:53:17.903970 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:53:17.903976 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:53:17.903981 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:53:17.903986 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:53:17.903991 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:53:17.903996 | orchestrator |
2026-01-10 14:53:17.904001 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-01-10 14:53:17.904006 | orchestrator | Saturday 10 January 2026 14:50:41 +0000 (0:00:03.986) 0:02:07.417 ******
2026-01-10 14:53:17.904011 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:53:17.904017 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:53:17.904022 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:53:17.904026 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:53:17.904032 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:53:17.904037 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:53:17.904046 | orchestrator |
2026-01-10 14:53:17.904052 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-01-10 14:53:17.904057 | orchestrator | Saturday 10 January 2026 14:50:43 +0000 (0:00:02.116) 0:02:09.534 ******
2026-01-10 14:53:17.904061 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:53:17.904066 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:53:17.904071 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:53:17.904076 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:53:17.904081 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:53:17.904086 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:53:17.904091 | orchestrator |
2026-01-10 14:53:17.904096 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-01-10 14:53:17.904101 | orchestrator | Saturday 10 January 2026 14:50:46 +0000 (0:00:03.240) 0:02:12.774 ******
2026-01-10 14:53:17.904107 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:53:17.904112 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:53:17.904118 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:53:17.904129 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:53:17.904135 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:53:17.904141 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:53:17.904147 | orchestrator |
2026-01-10 14:53:17.904153 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-01-10 14:53:17.904158 | orchestrator | Saturday 10 January 2026 14:50:49 +0000 (0:00:02.233) 0:02:15.008 ******
2026-01-10 14:53:17.904164 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:53:17.904170 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:53:17.904176 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:53:17.904181 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:53:17.904187 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:53:17.904193 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:53:17.904199 | orchestrator |
2026-01-10 14:53:17.904205 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-01-10 14:53:17.904211 | orchestrator | Saturday 10 January 2026 14:50:51 +0000 (0:00:02.359) 0:02:17.367 ******
2026-01-10 14:53:17.904216 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:53:17.904222 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:53:17.904228 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:53:17.904233 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:53:17.904238 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:53:17.904244 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:53:17.904250 | orchestrator |
2026-01-10 14:53:17.904256 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-01-10 14:53:17.904261 | orchestrator | Saturday 10 January 2026 14:50:53 +0000 (0:00:01.821) 0:02:19.189 ******
2026-01-10 14:53:17.904266 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:53:17.904272 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:53:17.904278 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:53:17.904283 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:53:17.904289 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:53:17.904294 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:53:17.904300 | orchestrator |
2026-01-10 14:53:17.904306 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-01-10 14:53:17.904312 | orchestrator | Saturday 10 January 2026 14:50:55 +0000 (0:00:01.955) 0:02:21.145 ******
2026-01-10 14:53:17.904318 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-10 14:53:17.904324 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:53:17.904329 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-10 14:53:17.904335 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:53:17.904341 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-10 14:53:17.904346 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:53:17.904351 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-10 14:53:17.904356 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:53:17.904365 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-10 14:53:17.904371 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:53:17.904376 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-10 14:53:17.904382 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:53:17.904387 | orchestrator |
2026-01-10 14:53:17.904393 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-01-10 14:53:17.904399 | orchestrator | Saturday 10 January 2026 14:50:57 +0000 (0:00:01.969) 0:02:23.114 ******
2026-01-10 14:53:17.904524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:53:17.904543 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:53:17.904549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:53:17.904555 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:53:17.904561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:53:17.904567 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:53:17.904576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:53:17.904581 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:53:17.904592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:53:17.904602 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:53:17.904608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:53:17.904615 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:53:17.904621 | orchestrator |
2026-01-10 14:53:17.904627 | orchestrator | TASK [service-check-containers : neutron | Check containers] *******************
2026-01-10 14:53:17.904633 | orchestrator | Saturday 10 January 2026 14:50:59 +0000 (0:00:01.992) 0:02:25.107 ******
2026-01-10 14:53:17.904639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:53:17.904645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:53:17.904670 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:53:17.904680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:53:17.904686 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:53:17.904693 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:53:17.904698 | orchestrator |
2026-01-10 14:53:17.904704 | orchestrator | TASK [service-check-containers : neutron | Notify handlers to restart containers] ***
2026-01-10 14:53:17.904710 | orchestrator | Saturday 10 January 2026 14:51:02 +0000 (0:00:03.044) 0:02:28.151 ******
2026-01-10 14:53:17.904715 | orchestrator | changed: [testbed-node-0] => {
2026-01-10 14:53:17.904721 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:53:17.904727 | orchestrator | }
2026-01-10 14:53:17.904732 | orchestrator | changed: [testbed-node-1] => {
2026-01-10 14:53:17.904738 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:53:17.904743 | orchestrator | }
2026-01-10 14:53:17.904749 | orchestrator | changed: [testbed-node-2] => {
2026-01-10 14:53:17.904755 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:53:17.904760 | orchestrator | }
2026-01-10 14:53:17.904766 | orchestrator | changed: [testbed-node-3] => {
2026-01-10 14:53:17.904772 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:53:17.904783 | orchestrator | }
2026-01-10 14:53:17.904789 | orchestrator | changed: [testbed-node-4] => {
2026-01-10 14:53:17.904794 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:53:17.904799 | orchestrator | }
2026-01-10 14:53:17.904804 | orchestrator | changed: [testbed-node-5] => {
2026-01-10 14:53:17.904810 | orchestrator |  "msg": "Notifying handlers"
2026-01-10 14:53:17.904815 | orchestrator | }
2026-01-10 14:53:17.904821 | orchestrator |
2026-01-10 14:53:17.904826 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-10 14:53:17.904836 | orchestrator | Saturday 10 January 2026 14:51:03 +0000 (0:00:00.946) 0:02:29.097 ******
2026-01-10 14:53:17.904850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:53:17.904856 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:53:17.904862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:53:17.904867 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:53:17.904872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:53:17.904878 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:53:17.904883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:53:17.904893 | orchestrator | skipping: [testbed-node-5]
2026-01-10 14:53:17.904902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:53:17.904908 | orchestrator | skipping: [testbed-node-3]
2026-01-10 14:53:17.904918 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-10 14:53:17.904923 | orchestrator | skipping: [testbed-node-4]
2026-01-10 14:53:17.904929 | orchestrator |
2026-01-10 14:53:17.904937 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-10 14:53:17.904945 | orchestrator | Saturday 10 January 2026 14:51:05 +0000 (0:00:02.432) 0:02:31.530 ******
2026-01-10 14:53:17.904950 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:53:17.904955 |
orchestrator | skipping: [testbed-node-1] 2026-01-10 14:53:17.904960 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:53:17.904965 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:53:17.904971 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:53:17.904975 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:53:17.904981 | orchestrator | 2026-01-10 14:53:17.904988 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-01-10 14:53:17.904997 | orchestrator | Saturday 10 January 2026 14:51:06 +0000 (0:00:00.725) 0:02:32.256 ****** 2026-01-10 14:53:17.905002 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:17.905008 | orchestrator | 2026-01-10 14:53:17.905013 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-01-10 14:53:17.905018 | orchestrator | Saturday 10 January 2026 14:51:08 +0000 (0:00:02.107) 0:02:34.363 ****** 2026-01-10 14:53:17.905024 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:17.905029 | orchestrator | 2026-01-10 14:53:17.905036 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-01-10 14:53:17.905041 | orchestrator | Saturday 10 January 2026 14:51:10 +0000 (0:00:02.428) 0:02:36.791 ****** 2026-01-10 14:53:17.905047 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:17.905052 | orchestrator | 2026-01-10 14:53:17.905062 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-10 14:53:17.905073 | orchestrator | Saturday 10 January 2026 14:51:50 +0000 (0:00:39.934) 0:03:16.726 ****** 2026-01-10 14:53:17.905079 | orchestrator | 2026-01-10 14:53:17.905086 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-10 14:53:17.905092 | orchestrator | Saturday 10 January 2026 14:51:50 +0000 (0:00:00.064) 0:03:16.790 ****** 2026-01-10 14:53:17.905098 | 
orchestrator | 2026-01-10 14:53:17.905103 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-10 14:53:17.905109 | orchestrator | Saturday 10 January 2026 14:51:51 +0000 (0:00:00.223) 0:03:17.013 ****** 2026-01-10 14:53:17.905112 | orchestrator | 2026-01-10 14:53:17.905115 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-10 14:53:17.905119 | orchestrator | Saturday 10 January 2026 14:51:51 +0000 (0:00:00.074) 0:03:17.087 ****** 2026-01-10 14:53:17.905122 | orchestrator | 2026-01-10 14:53:17.905126 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-10 14:53:17.905129 | orchestrator | Saturday 10 January 2026 14:51:51 +0000 (0:00:00.064) 0:03:17.152 ****** 2026-01-10 14:53:17.905132 | orchestrator | 2026-01-10 14:53:17.905136 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-10 14:53:17.905139 | orchestrator | Saturday 10 January 2026 14:51:51 +0000 (0:00:00.060) 0:03:17.213 ****** 2026-01-10 14:53:17.905142 | orchestrator | 2026-01-10 14:53:17.905146 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-01-10 14:53:17.905149 | orchestrator | Saturday 10 January 2026 14:51:51 +0000 (0:00:00.060) 0:03:17.274 ****** 2026-01-10 14:53:17.905153 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:53:17.905156 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:53:17.905159 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:53:17.905163 | orchestrator | 2026-01-10 14:53:17.905166 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-01-10 14:53:17.905169 | orchestrator | Saturday 10 January 2026 14:52:22 +0000 (0:00:31.462) 0:03:48.736 ****** 2026-01-10 14:53:17.905173 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:53:17.905176 | 
orchestrator | changed: [testbed-node-3] 2026-01-10 14:53:17.905179 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:53:17.905183 | orchestrator | 2026-01-10 14:53:17.905186 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:53:17.905193 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-10 14:53:17.905197 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-01-10 14:53:17.905200 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-01-10 14:53:17.905204 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-10 14:53:17.905211 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-10 14:53:17.905214 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-10 14:53:17.905218 | orchestrator | 2026-01-10 14:53:17.905221 | orchestrator | 2026-01-10 14:53:17.905224 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:53:17.905228 | orchestrator | Saturday 10 January 2026 14:53:16 +0000 (0:00:53.496) 0:04:42.233 ****** 2026-01-10 14:53:17.905231 | orchestrator | =============================================================================== 2026-01-10 14:53:17.905234 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 53.50s 2026-01-10 14:53:17.905242 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.93s 2026-01-10 14:53:17.905246 | orchestrator | neutron : Restart neutron-server container ----------------------------- 31.46s 2026-01-10 14:53:17.905249 | orchestrator | neutron : Copying over neutron.conf 
------------------------------------- 8.50s 2026-01-10 14:53:17.905252 | orchestrator | service-ks-register : neutron | Granting/revoking user roles ------------ 7.65s 2026-01-10 14:53:17.905256 | orchestrator | service-ks-register : neutron | Creating/deleting endpoints ------------- 5.89s 2026-01-10 14:53:17.905259 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.52s 2026-01-10 14:53:17.905262 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.16s 2026-01-10 14:53:17.905266 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 4.16s 2026-01-10 14:53:17.905269 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 4.08s 2026-01-10 14:53:17.905272 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 4.08s 2026-01-10 14:53:17.905276 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 4.05s 2026-01-10 14:53:17.905279 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 3.99s 2026-01-10 14:53:17.905282 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.96s 2026-01-10 14:53:17.905286 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 3.77s 2026-01-10 14:53:17.905289 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.47s 2026-01-10 14:53:17.905292 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.45s 2026-01-10 14:53:17.905296 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.37s 2026-01-10 14:53:17.905299 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.31s 2026-01-10 14:53:17.905302 | orchestrator | service-cert-copy : neutron | Copying over extra CA 
certificates -------- 3.27s 2026-01-10 14:53:17.905306 | orchestrator | 2026-01-10 14:53:17 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:53:17.905309 | orchestrator | 2026-01-10 14:53:17 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:53:17.905313 | orchestrator | 2026-01-10 14:53:17 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:53:17.905316 | orchestrator | 2026-01-10 14:53:17 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:20.926065 | orchestrator | 2026-01-10 14:53:20 | INFO  | Task cfe501a0-cf3d-476f-8c6e-8c382245ff6f is in state STARTED 2026-01-10 14:53:20.926537 | orchestrator | 2026-01-10 14:53:20 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:53:20.927682 | orchestrator | 2026-01-10 14:53:20 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:53:20.928355 | orchestrator | 2026-01-10 14:53:20 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:53:20.928386 | orchestrator | 2026-01-10 14:53:20 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:23.960564 | orchestrator | 2026-01-10 14:53:23 | INFO  | Task cfe501a0-cf3d-476f-8c6e-8c382245ff6f is in state STARTED 2026-01-10 14:53:23.960620 | orchestrator | 2026-01-10 14:53:23 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:53:23.960637 | orchestrator | 2026-01-10 14:53:23 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:53:23.960669 | orchestrator | 2026-01-10 14:53:23 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:53:23.960677 | orchestrator | 2026-01-10 14:53:23 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:26.995286 | orchestrator | 2026-01-10 14:53:26 | INFO  | Task cfe501a0-cf3d-476f-8c6e-8c382245ff6f is in state STARTED 
2026-01-10 14:53:26.996055 | orchestrator | 2026-01-10 14:53:26 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:53:26.996964 | orchestrator | 2026-01-10 14:53:26 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:53:26.998332 | orchestrator | 2026-01-10 14:53:26 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:53:26.998359 | orchestrator | 2026-01-10 14:53:26 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:30.033889 | orchestrator | 2026-01-10 14:53:30 | INFO  | Task cfe501a0-cf3d-476f-8c6e-8c382245ff6f is in state SUCCESS 2026-01-10 14:53:30.034260 | orchestrator | 2026-01-10 14:53:30 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:53:30.034989 | orchestrator | 2026-01-10 14:53:30 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:53:30.035993 | orchestrator | 2026-01-10 14:53:30 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:53:30.036575 | orchestrator | 2026-01-10 14:53:30 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:53:30.036809 | orchestrator | 2026-01-10 14:53:30 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:33.066008 | orchestrator | 2026-01-10 14:53:33 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:53:33.067091 | orchestrator | 2026-01-10 14:53:33 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:53:33.068349 | orchestrator | 2026-01-10 14:53:33 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:53:33.069884 | orchestrator | 2026-01-10 14:53:33 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:53:33.069924 | orchestrator | 2026-01-10 14:53:33 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:36.120777 | 
orchestrator | 2026-01-10 14:53:36 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:53:36.121481 | orchestrator | 2026-01-10 14:53:36 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:53:36.122546 | orchestrator | 2026-01-10 14:53:36 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:53:36.123760 | orchestrator | 2026-01-10 14:53:36 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:53:36.124985 | orchestrator | 2026-01-10 14:53:36 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:39.168317 | orchestrator | 2026-01-10 14:53:39 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:53:39.168711 | orchestrator | 2026-01-10 14:53:39 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:53:39.169505 | orchestrator | 2026-01-10 14:53:39 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:53:39.170393 | orchestrator | 2026-01-10 14:53:39 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:53:39.170428 | orchestrator | 2026-01-10 14:53:39 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:42.204943 | orchestrator | 2026-01-10 14:53:42 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:53:42.205437 | orchestrator | 2026-01-10 14:53:42 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:53:42.206263 | orchestrator | 2026-01-10 14:53:42 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:53:42.206899 | orchestrator | 2026-01-10 14:53:42 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:53:42.206929 | orchestrator | 2026-01-10 14:53:42 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:45.255403 | orchestrator | 2026-01-10 
14:53:45 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:53:45.257376 | orchestrator | 2026-01-10 14:53:45 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:53:45.263140 | orchestrator | 2026-01-10 14:53:45 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:53:45.269015 | orchestrator | 2026-01-10 14:53:45 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:53:45.269713 | orchestrator | 2026-01-10 14:53:45 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:48.326242 | orchestrator | 2026-01-10 14:53:48 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:53:48.326528 | orchestrator | 2026-01-10 14:53:48 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:53:48.327648 | orchestrator | 2026-01-10 14:53:48 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:53:48.328589 | orchestrator | 2026-01-10 14:53:48 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:53:48.328702 | orchestrator | 2026-01-10 14:53:48 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:51.363206 | orchestrator | 2026-01-10 14:53:51 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:53:51.365087 | orchestrator | 2026-01-10 14:53:51 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:53:51.366755 | orchestrator | 2026-01-10 14:53:51 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:53:51.368724 | orchestrator | 2026-01-10 14:53:51 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:53:51.369034 | orchestrator | 2026-01-10 14:53:51 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:54.414512 | orchestrator | 2026-01-10 14:53:54 | INFO  | Task 
50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:53:54.417551 | orchestrator | 2026-01-10 14:53:54 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:53:54.420427 | orchestrator | 2026-01-10 14:53:54 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:53:54.421958 | orchestrator | 2026-01-10 14:53:54 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:53:54.422003 | orchestrator | 2026-01-10 14:53:54 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:53:57.459648 | orchestrator | 2026-01-10 14:53:57 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:53:57.460839 | orchestrator | 2026-01-10 14:53:57 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:53:57.463369 | orchestrator | 2026-01-10 14:53:57 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:53:57.463999 | orchestrator | 2026-01-10 14:53:57 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:53:57.464253 | orchestrator | 2026-01-10 14:53:57 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:54:00.514335 | orchestrator | 2026-01-10 14:54:00 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:54:00.520709 | orchestrator | 2026-01-10 14:54:00 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:54:00.523624 | orchestrator | 2026-01-10 14:54:00 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:54:00.525001 | orchestrator | 2026-01-10 14:54:00 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:54:00.526345 | orchestrator | 2026-01-10 14:54:00 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:54:03.588320 | orchestrator | 2026-01-10 14:54:03 | INFO  | Task 
50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:54:03.590283 | orchestrator | 2026-01-10 14:54:03 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:54:03.592384 | orchestrator | 2026-01-10 14:54:03 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:54:03.594772 | orchestrator | 2026-01-10 14:54:03 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:54:03.595178 | orchestrator | 2026-01-10 14:54:03 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:54:06.636247 | orchestrator | 2026-01-10 14:54:06 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:54:06.638505 | orchestrator | 2026-01-10 14:54:06 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:54:06.640785 | orchestrator | 2026-01-10 14:54:06 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:54:06.642474 | orchestrator | 2026-01-10 14:54:06 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:54:06.642598 | orchestrator | 2026-01-10 14:54:06 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:54:09.687386 | orchestrator | 2026-01-10 14:54:09 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:54:09.689206 | orchestrator | 2026-01-10 14:54:09 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:54:09.691272 | orchestrator | 2026-01-10 14:54:09 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:54:09.692705 | orchestrator | 2026-01-10 14:54:09 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:54:09.692796 | orchestrator | 2026-01-10 14:54:09 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:54:12.734325 | orchestrator | 2026-01-10 14:54:12 | INFO  | Task 
50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:54:12.735861 | orchestrator | 2026-01-10 14:54:12 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:54:12.737577 | orchestrator | 2026-01-10 14:54:12 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:54:12.739166 | orchestrator | 2026-01-10 14:54:12 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:54:12.739200 | orchestrator | 2026-01-10 14:54:12 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:54:15.786863 | orchestrator | 2026-01-10 14:54:15 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:54:15.789356 | orchestrator | 2026-01-10 14:54:15 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:54:15.791703 | orchestrator | 2026-01-10 14:54:15 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state STARTED 2026-01-10 14:54:15.793935 | orchestrator | 2026-01-10 14:54:15 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:54:15.793987 | orchestrator | 2026-01-10 14:54:15 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:54:18.848640 | orchestrator | 2026-01-10 14:54:18 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:54:18.848689 | orchestrator | 2026-01-10 14:54:18 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:54:18.850753 | orchestrator | 2026-01-10 14:54:18 | INFO  | Task 193054ac-f4e0-40cd-a896-86221a63bfeb is in state SUCCESS 2026-01-10 14:54:18.852463 | orchestrator | 2026-01-10 14:54:18.852512 | orchestrator | 2026-01-10 14:54:18.852518 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:54:18.852524 | orchestrator | 2026-01-10 14:54:18.852528 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-01-10 14:54:18.852532 | orchestrator | Saturday 10 January 2026 14:53:24 +0000 (0:00:00.346) 0:00:00.346 ****** 2026-01-10 14:54:18.852536 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:54:18.852541 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:54:18.852545 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:54:18.852549 | orchestrator | 2026-01-10 14:54:18.852564 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:54:18.852568 | orchestrator | Saturday 10 January 2026 14:53:24 +0000 (0:00:00.505) 0:00:00.851 ****** 2026-01-10 14:54:18.852572 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-01-10 14:54:18.852576 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-01-10 14:54:18.852580 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-01-10 14:54:18.852584 | orchestrator | 2026-01-10 14:54:18.852587 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2026-01-10 14:54:18.852591 | orchestrator | 2026-01-10 14:54:18.852595 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-01-10 14:54:18.852599 | orchestrator | Saturday 10 January 2026 14:53:25 +0000 (0:00:00.864) 0:00:01.716 ****** 2026-01-10 14:54:18.852602 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:54:18.852606 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:54:18.852610 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:54:18.852613 | orchestrator | 2026-01-10 14:54:18.852620 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:54:18.852626 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:54:18.852634 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
2026-01-10 14:54:18.852640 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:54:18.852646 | orchestrator | 2026-01-10 14:54:18.852653 | orchestrator | 2026-01-10 14:54:18.852659 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:54:18.852666 | orchestrator | Saturday 10 January 2026 14:53:26 +0000 (0:00:00.762) 0:00:02.478 ****** 2026-01-10 14:54:18.852672 | orchestrator | =============================================================================== 2026-01-10 14:54:18.852697 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.86s 2026-01-10 14:54:18.852710 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.76s 2026-01-10 14:54:18.852716 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.51s 2026-01-10 14:54:18.852723 | orchestrator | 2026-01-10 14:54:18.852781 | orchestrator | 2026-01-10 14:54:18.852787 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:54:18.852803 | orchestrator | 2026-01-10 14:54:18.852807 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:54:18.852811 | orchestrator | Saturday 10 January 2026 14:52:19 +0000 (0:00:00.266) 0:00:00.266 ****** 2026-01-10 14:54:18.852815 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:54:18.852818 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:54:18.852836 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:54:18.852846 | orchestrator | 2026-01-10 14:54:18.852854 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:54:18.852860 | orchestrator | Saturday 10 January 2026 14:52:19 +0000 (0:00:00.283) 0:00:00.549 ****** 2026-01-10 14:54:18.852867 | orchestrator | ok: 
[testbed-node-0] => (item=enable_magnum_True)
2026-01-10 14:54:18.852874 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
ok: [testbed-node-2] => (item=enable_magnum_True)

PLAY [Apply role magnum] *******************************************************

TASK [magnum : include_tasks] **************************************************
Saturday 10 January 2026 14:52:20 +0000 (0:00:00.486) 0:00:01.035 ******
included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-ks-register : magnum | Creating/deleting services] ***************
Saturday 10 January 2026 14:52:20 +0000 (0:00:00.558) 0:00:01.594 ******
changed: [testbed-node-0] => (item=magnum (container-infra))

TASK [service-ks-register : magnum | Creating/deleting endpoints] **************
Saturday 10 January 2026 14:52:24 +0000 (0:00:03.620) 0:00:05.215 ******
changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)

TASK [service-ks-register : magnum | Creating projects] ************************
Saturday 10 January 2026 14:52:31 +0000 (0:00:06.947) 0:00:12.163 ******
ok: [testbed-node-0] => (item=service)

TASK [service-ks-register : magnum | Creating users] ***************************
Saturday 10 January 2026 14:52:34 +0000 (0:00:03.249) 0:00:15.412 ******
[WARNING]: Module did not set no_log for update_password
changed: [testbed-node-0] => (item=magnum -> service)

TASK [service-ks-register : magnum | Creating roles] ***************************
Saturday 10 January 2026 14:52:39 +0000 (0:00:04.315) 0:00:19.728 ******
ok: [testbed-node-0] => (item=admin)

TASK [service-ks-register : magnum | Granting/revoking user roles] *************
Saturday 10 January 2026 14:52:42 +0000 (0:00:03.321) 0:00:23.049 ******
changed: [testbed-node-0] => (item=magnum -> service -> admin)

TASK [magnum : Creating Magnum trustee domain] *********************************
Saturday 10 January 2026 14:52:45 +0000 (0:00:03.243) 0:00:26.292 ******
changed: [testbed-node-0]

TASK [magnum : Creating Magnum trustee user] ***********************************
Saturday 10 January 2026 14:52:48 +0000 (0:00:03.101) 0:00:29.394 ******
changed: [testbed-node-0]

TASK [magnum : Creating Magnum trustee user role] ******************************
Saturday 10 January 2026 14:52:53 +0000 (0:00:04.273) 0:00:33.667 ******
changed: [testbed-node-0]

TASK [magnum : Ensuring config directories exist] ******************************
Saturday 10 January 2026 14:52:56 +0000 (0:00:03.379) 0:00:37.047 ******
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})

TASK [magnum : Check if policies shall be overwritten] *************************
Saturday 10 January 2026 14:52:57 +0000 (0:00:00.136) 0:00:38.352 ******
skipping: [testbed-node-0]

TASK [magnum : Set magnum policy file] *****************************************
Saturday 10 January 2026 14:52:57 +0000 (0:00:00.535) 0:00:38.489 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [magnum : Check if kubeconfig file is supplied] ***************************
Saturday 10 January 2026 14:52:58 +0000 (0:00:00.895) 0:00:39.024 ******
ok: [testbed-node-0 -> localhost]

TASK [magnum : Copying over kubeconfig file] ***********************************
Saturday 10 January 2026 14:52:59 +0000 (0:00:00.895) 0:00:39.920 ******
changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})

TASK [magnum : Set magnum kubeconfig file's path] ******************************
Saturday 10 January 2026 14:53:01 +0000 (0:00:02.444) 0:00:42.364 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [magnum : include_tasks] **************************************************
Saturday 10 January 2026 14:53:02 +0000 (0:00:00.312) 0:00:42.676 ******
included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
Saturday 10 January 2026 14:53:02 +0000 (0:00:00.746) 0:00:43.423 ******
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})

TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
Saturday 10 January 2026 14:53:05 +0000 (0:00:02.757) 0:00:46.181 ******
skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
skipping: [testbed-node-2]

TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
Saturday 10 January 2026 14:53:07 +0000 (0:00:01.867) 0:00:48.048 ******
skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
skipping: [testbed-node-1]
skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
skipping: [testbed-node-2]

TASK [magnum : Copying over config.json files for services] ********************
Saturday 10 January 2026 14:53:09 +0000 (0:00:01.602) 0:00:49.651 ******
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})

TASK [magnum : Copying over magnum.conf] ***************************************
Saturday 10 January 2026 14:53:11 +0000 (0:00:02.497) 0:00:52.148 ******
changed:
[testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:54:18.853867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': 
['option httpchk']}}}}) 2026-01-10 14:54:18.853871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:54:18.853877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:54:18.853882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 
'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:54:18.853888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:54:18.853892 | orchestrator | 2026-01-10 14:54:18.853896 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-01-10 14:54:18.853900 | orchestrator | Saturday 10 January 2026 14:53:19 +0000 (0:00:07.497) 0:00:59.646 ****** 2026-01-10 14:54:18.853906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:54:18.853910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:54:18.853914 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:54:18.853921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:54:18.853927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:54:18.853931 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:54:18.853937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:54:18.853942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:54:18.853953 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:54:18.853961 | orchestrator | 2026-01-10 14:54:18.853965 | orchestrator | TASK [service-check-containers : magnum | Check containers] ******************** 2026-01-10 14:54:18.853969 | orchestrator | Saturday 10 January 2026 14:53:20 +0000 (0:00:01.464) 0:01:01.110 ****** 2026-01-10 14:54:18.853975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:54:18.853982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:54:18.853988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:54:18.853993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:54:18.853997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:54:18.854003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:54:18.854009 | orchestrator | 2026-01-10 14:54:18.854041 | orchestrator | TASK [service-check-containers : magnum | Notify handlers to restart containers] *** 2026-01-10 14:54:18.854046 | orchestrator | Saturday 10 January 2026 14:53:24 +0000 (0:00:03.624) 0:01:04.735 ****** 2026-01-10 14:54:18.854050 | orchestrator | changed: [testbed-node-0] => { 2026-01-10 14:54:18.854054 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:54:18.854058 | orchestrator | } 2026-01-10 14:54:18.854062 | orchestrator | changed: [testbed-node-1] => { 2026-01-10 14:54:18.854065 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:54:18.854082 | orchestrator | } 2026-01-10 14:54:18.854088 | orchestrator | changed: [testbed-node-2] => { 2026-01-10 14:54:18.854094 | orchestrator |  "msg": "Notifying handlers" 
2026-01-10 14:54:18.854101 | orchestrator | } 2026-01-10 14:54:18.854107 | orchestrator | 2026-01-10 14:54:18.854118 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-10 14:54:18.854125 | orchestrator | Saturday 10 January 2026 14:53:24 +0000 (0:00:00.553) 0:01:05.288 ****** 2026-01-10 14:54:18.854133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:54:18.854144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:54:18.854151 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:54:18.854158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:54:18.854185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:54:18.854192 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:54:18.854198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:54:18.854207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:54:18.854213 | orchestrator | 
skipping: [testbed-node-2] 2026-01-10 14:54:18.854219 | orchestrator | 2026-01-10 14:54:18.854225 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-10 14:54:18.854230 | orchestrator | Saturday 10 January 2026 14:53:25 +0000 (0:00:01.225) 0:01:06.513 ****** 2026-01-10 14:54:18.854236 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:54:18.854242 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:54:18.854248 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:54:18.854253 | orchestrator | 2026-01-10 14:54:18.854259 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-01-10 14:54:18.854265 | orchestrator | Saturday 10 January 2026 14:53:26 +0000 (0:00:00.674) 0:01:07.187 ****** 2026-01-10 14:54:18.854271 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:54:18.854280 | orchestrator | 2026-01-10 14:54:18.854286 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-01-10 14:54:18.854292 | orchestrator | Saturday 10 January 2026 14:53:28 +0000 (0:00:02.324) 0:01:09.512 ****** 2026-01-10 14:54:18.854297 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:54:18.854303 | orchestrator | 2026-01-10 14:54:18.854309 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-01-10 14:54:18.854315 | orchestrator | Saturday 10 January 2026 14:53:31 +0000 (0:00:02.512) 0:01:12.024 ****** 2026-01-10 14:54:18.854321 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:54:18.854326 | orchestrator | 2026-01-10 14:54:18.854332 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-10 14:54:18.854338 | orchestrator | Saturday 10 January 2026 14:53:46 +0000 (0:00:15.548) 0:01:27.573 ****** 2026-01-10 14:54:18.854344 | orchestrator | 2026-01-10 14:54:18.854349 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-01-10 14:54:18.854355 | orchestrator | Saturday 10 January 2026 14:53:47 +0000 (0:00:00.068) 0:01:27.641 ****** 2026-01-10 14:54:18.854361 | orchestrator | 2026-01-10 14:54:18.854366 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-10 14:54:18.854372 | orchestrator | Saturday 10 January 2026 14:53:47 +0000 (0:00:00.066) 0:01:27.708 ****** 2026-01-10 14:54:18.854378 | orchestrator | 2026-01-10 14:54:18.854384 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-01-10 14:54:18.854390 | orchestrator | Saturday 10 January 2026 14:53:47 +0000 (0:00:00.070) 0:01:27.779 ****** 2026-01-10 14:54:18.854396 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:54:18.854402 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:54:18.854410 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:54:18.854416 | orchestrator | 2026-01-10 14:54:18.854422 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-01-10 14:54:18.854427 | orchestrator | Saturday 10 January 2026 14:54:00 +0000 (0:00:12.906) 0:01:40.685 ****** 2026-01-10 14:54:18.854434 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:54:18.854439 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:54:18.854445 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:54:18.854451 | orchestrator | 2026-01-10 14:54:18.854457 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:54:18.854463 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 14:54:18.854470 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-10 14:54:18.854477 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  
rescued=0 ignored=0 2026-01-10 14:54:18.854483 | orchestrator | 2026-01-10 14:54:18.854489 | orchestrator | 2026-01-10 14:54:18.854494 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:54:18.854500 | orchestrator | Saturday 10 January 2026 14:54:17 +0000 (0:00:17.270) 0:01:57.956 ****** 2026-01-10 14:54:18.854506 | orchestrator | =============================================================================== 2026-01-10 14:54:18.854512 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 17.27s 2026-01-10 14:54:18.854518 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.55s 2026-01-10 14:54:18.854524 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 12.91s 2026-01-10 14:54:18.854530 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 7.50s 2026-01-10 14:54:18.854536 | orchestrator | service-ks-register : magnum | Creating/deleting endpoints -------------- 6.95s 2026-01-10 14:54:18.854542 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.32s 2026-01-10 14:54:18.854549 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.27s 2026-01-10 14:54:18.854612 | orchestrator | service-check-containers : magnum | Check containers -------------------- 3.62s 2026-01-10 14:54:18.854619 | orchestrator | service-ks-register : magnum | Creating/deleting services --------------- 3.62s 2026-01-10 14:54:18.854626 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.38s 2026-01-10 14:54:18.854632 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.32s 2026-01-10 14:54:18.854637 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.25s 2026-01-10 14:54:18.854641 | 
orchestrator | service-ks-register : magnum | Granting/revoking user roles ------------- 3.24s 2026-01-10 14:54:18.854646 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.10s 2026-01-10 14:54:18.854650 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.76s 2026-01-10 14:54:18.854658 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.51s 2026-01-10 14:54:18.854663 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.50s 2026-01-10 14:54:18.854667 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.44s 2026-01-10 14:54:18.854672 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.32s 2026-01-10 14:54:18.854678 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS certificate --- 1.87s 2026-01-10 14:54:18.854685 | orchestrator | 2026-01-10 14:54:18 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:54:18.854692 | orchestrator | 2026-01-10 14:54:18 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:54:21.911076 | orchestrator | 2026-01-10 14:54:21 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:54:21.913507 | orchestrator | 2026-01-10 14:54:21 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:54:21.917938 | orchestrator | 2026-01-10 14:54:21 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state STARTED 2026-01-10 14:54:21.918460 | orchestrator | 2026-01-10 14:54:21 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:54:24.973065 | orchestrator | 2026-01-10 14:54:24 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:54:24.975468 | orchestrator | 2026-01-10 14:54:24 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state 
STARTED [... repeated status-polling entries for tasks 50416474-4d7f-4703-a08e-c36f9b97b8f5, 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 and 0a348750-1b44-4b05-95c0-509296aa4862 (all in state STARTED), 14:54:24 through 14:54:58, elided ...] 2026-01-10 14:54:58 | INFO  | Wait 1 second(s) until the next 
check 2026-01-10 14:55:01.620607 | orchestrator | 2026-01-10 14:55:01 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:55:01.622110 | orchestrator | 2026-01-10 14:55:01 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:55:01.625133 | orchestrator | 2026-01-10 14:55:01 | INFO  | Task 0a348750-1b44-4b05-95c0-509296aa4862 is in state SUCCESS 2026-01-10 14:55:01.626944 | orchestrator | 2026-01-10 14:55:01.626982 | orchestrator | 2026-01-10 14:55:01.626987 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:55:01.626992 | orchestrator | 2026-01-10 14:55:01.626996 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:55:01.626999 | orchestrator | Saturday 10 January 2026 14:53:18 +0000 (0:00:00.255) 0:00:00.255 ****** 2026-01-10 14:55:01.627003 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:55:01.627008 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:55:01.627011 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:55:01.627015 | orchestrator | 2026-01-10 14:55:01.627018 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:55:01.627022 | orchestrator | Saturday 10 January 2026 14:53:19 +0000 (0:00:00.297) 0:00:00.553 ****** 2026-01-10 14:55:01.627025 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-01-10 14:55:01.627030 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-01-10 14:55:01.627034 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-01-10 14:55:01.627037 | orchestrator | 2026-01-10 14:55:01.627041 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-01-10 14:55:01.627044 | orchestrator | 2026-01-10 14:55:01.627048 | orchestrator | TASK [grafana : include_tasks] 
************************************************* 2026-01-10 14:55:01.627051 | orchestrator | Saturday 10 January 2026 14:53:19 +0000 (0:00:00.667) 0:00:01.221 ****** 2026-01-10 14:55:01.627055 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:55:01.627059 | orchestrator | 2026-01-10 14:55:01.627062 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-01-10 14:55:01.627066 | orchestrator | Saturday 10 January 2026 14:53:20 +0000 (0:00:00.752) 0:00:01.973 ****** 2026-01-10 14:55:01.627070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:55:01.627087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:55:01.627097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:55:01.627101 | orchestrator | 2026-01-10 14:55:01.627105 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-01-10 14:55:01.627108 | orchestrator | Saturday 10 January 2026 14:53:21 +0000 (0:00:01.397) 0:00:03.371 ****** 2026-01-10 14:55:01.627112 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:55:01.627116 | orchestrator | 2026-01-10 14:55:01.627120 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-10 14:55:01.627124 | orchestrator | Saturday 10 January 2026 14:53:23 +0000 (0:00:01.624) 0:00:04.995 ****** 2026-01-10 14:55:01.627127 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:55:01.627131 | orchestrator | 2026-01-10 14:55:01.627135 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-01-10 14:55:01.627145 | orchestrator | Saturday 10 January 
2026 14:53:24 +0000 (0:00:00.850) 0:00:05.846 ****** 2026-01-10 14:55:01.627149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:55:01.627153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:55:01.627159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:55:01.627162 | orchestrator | 2026-01-10 14:55:01.627165 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-01-10 14:55:01.627169 | orchestrator | Saturday 10 January 2026 14:53:26 +0000 (0:00:01.883) 0:00:07.729 ****** 2026-01-10 14:55:01.627174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:55:01.627178 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:01.627181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:55:01.627184 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:01.627190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:55:01.627193 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:01.627196 | orchestrator | 2026-01-10 14:55:01.627199 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-01-10 14:55:01.627202 | orchestrator | Saturday 10 January 2026 14:53:27 +0000 (0:00:00.976) 0:00:08.706 ****** 2026-01-10 14:55:01.627206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:55:01.627211 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:01.627214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:55:01.627217 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:01.627222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:55:01.627225 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:01.627229 | orchestrator | 2026-01-10 14:55:01.627232 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-01-10 14:55:01.627235 | orchestrator | Saturday 10 January 2026 14:53:29 +0000 (0:00:02.500) 0:00:11.207 ****** 2026-01-10 14:55:01.627240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:55:01.627244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:55:01.627253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:55:01.627256 | orchestrator | 2026-01-10 14:55:01.627259 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-01-10 14:55:01.627262 | orchestrator | Saturday 10 January 2026 14:53:31 +0000 (0:00:01.398) 0:00:12.605 ****** 2026-01-10 14:55:01.627266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:55:01.627271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 
'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:55:01.627274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:55:01.627277 | orchestrator | 2026-01-10 14:55:01.627280 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-01-10 14:55:01.627285 | orchestrator | Saturday 10 January 2026 14:53:32 +0000 (0:00:01.487) 0:00:14.092 ****** 2026-01-10 14:55:01.627288 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:01.627291 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:01.627294 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:01.627297 | 
orchestrator | 2026-01-10 14:55:01.627300 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-01-10 14:55:01.627303 | orchestrator | Saturday 10 January 2026 14:53:33 +0000 (0:00:00.446) 0:00:14.539 ****** 2026-01-10 14:55:01.627309 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-10 14:55:01.627312 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-10 14:55:01.627315 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-10 14:55:01.627318 | orchestrator | 2026-01-10 14:55:01.627321 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-01-10 14:55:01.627324 | orchestrator | Saturday 10 January 2026 14:53:34 +0000 (0:00:01.280) 0:00:15.819 ****** 2026-01-10 14:55:01.627327 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-10 14:55:01.627330 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-10 14:55:01.627333 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-10 14:55:01.627336 | orchestrator | 2026-01-10 14:55:01.627339 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ****** 2026-01-10 14:55:01.627342 | orchestrator | Saturday 10 January 2026 14:53:35 +0000 (0:00:01.310) 0:00:17.130 ****** 2026-01-10 14:55:01.627345 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-10 14:55:01.627348 | orchestrator | 2026-01-10 14:55:01.627351 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] *************************** 2026-01-10 14:55:01.627354 | orchestrator | 
Saturday 10 January 2026 14:53:36 +0000 (0:00:01.094) 0:00:18.225 ****** 2026-01-10 14:55:01.627358 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:55:01.627361 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:55:01.627364 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:55:01.627367 | orchestrator | 2026-01-10 14:55:01.627370 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-01-10 14:55:01.627373 | orchestrator | Saturday 10 January 2026 14:53:37 +0000 (0:00:01.046) 0:00:19.271 ****** 2026-01-10 14:55:01.627376 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:01.627379 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:01.627382 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:01.627385 | orchestrator | 2026-01-10 14:55:01.627388 | orchestrator | TASK [service-check-containers : grafana | Check containers] ******************* 2026-01-10 14:55:01.627391 | orchestrator | Saturday 10 January 2026 14:53:39 +0000 (0:00:01.654) 0:00:20.925 ****** 2026-01-10 14:55:01.627394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:55:01.627399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:55:01.627407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:55:01.627411 | orchestrator | 2026-01-10 14:55:01.627414 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] *** 2026-01-10 14:55:01.627417 | orchestrator | Saturday 10 January 2026 14:53:40 +0000 (0:00:01.285) 0:00:22.211 ****** 2026-01-10 14:55:01.627420 | orchestrator | changed: [testbed-node-0] => { 2026-01-10 14:55:01.627423 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:55:01.627426 | orchestrator | } 2026-01-10 14:55:01.627429 | orchestrator | changed: [testbed-node-1] => { 2026-01-10 14:55:01.627432 
| orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:55:01.627435 | orchestrator | } 2026-01-10 14:55:01.627438 | orchestrator | changed: [testbed-node-2] => { 2026-01-10 14:55:01.627441 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:55:01.627444 | orchestrator | } 2026-01-10 14:55:01.627448 | orchestrator | 2026-01-10 14:55:01.627451 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-10 14:55:01.627454 | orchestrator | Saturday 10 January 2026 14:53:41 +0000 (0:00:00.365) 0:00:22.577 ****** 2026-01-10 14:55:01.627457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:55:01.627460 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:01.627463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option 
httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:55:01.627466 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:01.627471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:55:01.627476 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:01.627479 | orchestrator | 2026-01-10 14:55:01.627509 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-01-10 14:55:01.627513 | orchestrator | Saturday 10 January 2026 14:53:42 +0000 (0:00:01.601) 0:00:24.178 ****** 2026-01-10 14:55:01.627516 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:01.627519 | orchestrator | 2026-01-10 14:55:01.627522 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-01-10 14:55:01.627525 | orchestrator | Saturday 10 January 2026 14:53:45 +0000 (0:00:02.317) 0:00:26.496 ****** 2026-01-10 14:55:01.627528 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:01.627532 | orchestrator | 2026-01-10 14:55:01.627535 | orchestrator | TASK [grafana : Flush handlers] 
************************************************ 2026-01-10 14:55:01.627538 | orchestrator | Saturday 10 January 2026 14:53:47 +0000 (0:00:02.136) 0:00:28.633 ****** 2026-01-10 14:55:01.627541 | orchestrator | 2026-01-10 14:55:01.627544 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-10 14:55:01.627547 | orchestrator | Saturday 10 January 2026 14:53:47 +0000 (0:00:00.076) 0:00:28.709 ****** 2026-01-10 14:55:01.627550 | orchestrator | 2026-01-10 14:55:01.627553 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-10 14:55:01.627558 | orchestrator | Saturday 10 January 2026 14:53:47 +0000 (0:00:00.065) 0:00:28.775 ****** 2026-01-10 14:55:01.627561 | orchestrator | 2026-01-10 14:55:01.627564 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-01-10 14:55:01.627567 | orchestrator | Saturday 10 January 2026 14:53:47 +0000 (0:00:00.083) 0:00:28.858 ****** 2026-01-10 14:55:01.627570 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:01.627573 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:01.627576 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:55:01.627579 | orchestrator | 2026-01-10 14:55:01.627582 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-01-10 14:55:01.627585 | orchestrator | Saturday 10 January 2026 14:53:54 +0000 (0:00:07.102) 0:00:35.961 ****** 2026-01-10 14:55:01.627588 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:01.627591 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:01.627595 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-01-10 14:55:01.627598 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2026-01-10 14:55:01.627601 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-01-10 14:55:01.627604 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:55:01.627607 | orchestrator | 2026-01-10 14:55:01.627610 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-01-10 14:55:01.627613 | orchestrator | Saturday 10 January 2026 14:54:32 +0000 (0:00:38.438) 0:01:14.399 ****** 2026-01-10 14:55:01.627616 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:01.627619 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:55:01.627622 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:55:01.627625 | orchestrator | 2026-01-10 14:55:01.627628 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-01-10 14:55:01.627631 | orchestrator | Saturday 10 January 2026 14:54:55 +0000 (0:00:22.424) 0:01:36.824 ****** 2026-01-10 14:55:01.627634 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:55:01.627637 | orchestrator | 2026-01-10 14:55:01.627640 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-01-10 14:55:01.627643 | orchestrator | Saturday 10 January 2026 14:54:57 +0000 (0:00:02.074) 0:01:38.899 ****** 2026-01-10 14:55:01.627646 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:01.627653 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:55:01.627656 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:55:01.627659 | orchestrator | 2026-01-10 14:55:01.627662 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-01-10 14:55:01.627665 | orchestrator | Saturday 10 January 2026 14:54:57 +0000 (0:00:00.297) 0:01:39.197 ****** 2026-01-10 14:55:01.627669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 
'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-01-10 14:55:01.627674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-01-10 14:55:01.627679 | orchestrator | 2026-01-10 14:55:01.627682 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-01-10 14:55:01.627685 | orchestrator | Saturday 10 January 2026 14:54:59 +0000 (0:00:02.162) 0:01:41.359 ****** 2026-01-10 14:55:01.627688 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:55:01.627691 | orchestrator | 2026-01-10 14:55:01.627694 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:55:01.627697 | orchestrator | testbed-node-0 : ok=22  changed=13  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 14:55:01.627703 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 14:55:01.627706 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-10 14:55:01.627709 | orchestrator | 2026-01-10 14:55:01.627712 | orchestrator | 2026-01-10 14:55:01.627715 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:55:01.627718 | orchestrator | Saturday 10 January 2026 14:55:00 +0000 (0:00:00.236) 0:01:41.595 ****** 2026-01-10 14:55:01.627721 | orchestrator | =============================================================================== 2026-01-10 14:55:01.627724 | orchestrator | grafana 
: Waiting for grafana to start on first node ------------------- 38.44s 2026-01-10 14:55:01.627728 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 22.42s 2026-01-10 14:55:01.627731 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.10s 2026-01-10 14:55:01.627734 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 2.50s 2026-01-10 14:55:01.627737 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.32s 2026-01-10 14:55:01.627740 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.16s 2026-01-10 14:55:01.627743 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.14s 2026-01-10 14:55:01.627748 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.07s 2026-01-10 14:55:01.627751 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.88s 2026-01-10 14:55:01.627754 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.65s 2026-01-10 14:55:01.627757 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.62s 2026-01-10 14:55:01.627760 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.60s 2026-01-10 14:55:01.627763 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.49s 2026-01-10 14:55:01.627766 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.40s 2026-01-10 14:55:01.627769 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.40s 2026-01-10 14:55:01.627775 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.31s 2026-01-10 14:55:01.627778 | orchestrator | 
service-check-containers : grafana | Check containers ------------------- 1.29s 2026-01-10 14:55:01.627781 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.28s 2026-01-10 14:55:01.627784 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 1.10s 2026-01-10 14:55:01.627787 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 1.04s 2026-01-10 14:55:01.627790 | orchestrator | 2026-01-10 14:55:01 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:55:04.671247 | orchestrator | 2026-01-10 14:55:04 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:55:04.675468 | orchestrator | 2026-01-10 14:55:04 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:55:04.675536 | orchestrator | 2026-01-10 14:55:04 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:55:07.723915 | orchestrator | 2026-01-10 14:55:07 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:55:07.725868 | orchestrator | 2026-01-10 14:55:07 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:55:07.725922 | orchestrator | 2026-01-10 14:55:07 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:55:10.762879 | orchestrator | 2026-01-10 14:55:10 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:55:10.762957 | orchestrator | 2026-01-10 14:55:10 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:55:10.762995 | orchestrator | 2026-01-10 14:55:10 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:55:13.809306 | orchestrator | 2026-01-10 14:55:13 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:55:13.811154 | orchestrator | 2026-01-10 14:55:13 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 
14:55:13.811205 | orchestrator | 2026-01-10 14:55:13 | INFO  | Wait 1 second(s) until the next check [repeated task-state polling from 14:55:16 through 14:56:33 omitted; both tasks remained in state STARTED] 2026-01-10
14:56:33.058607 | orchestrator | 2026-01-10 14:56:33 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:56:36.100828 | orchestrator | 2026-01-10 14:56:36 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:56:36.100923 | orchestrator | 2026-01-10 14:56:36 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:56:36.100961 | orchestrator | 2026-01-10 14:56:36 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:56:39.152697 | orchestrator | 2026-01-10 14:56:39 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:56:39.154387 | orchestrator | 2026-01-10 14:56:39 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:56:39.154429 | orchestrator | 2026-01-10 14:56:39 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:56:42.201202 | orchestrator | 2026-01-10 14:56:42 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:56:42.201253 | orchestrator | 2026-01-10 14:56:42 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:56:42.201474 | orchestrator | 2026-01-10 14:56:42 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:56:45.239010 | orchestrator | 2026-01-10 14:56:45 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:56:45.240629 | orchestrator | 2026-01-10 14:56:45 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state STARTED 2026-01-10 14:56:45.240789 | orchestrator | 2026-01-10 14:56:45 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:56:48.280761 | orchestrator | 2026-01-10 14:56:48 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:56:48.284410 | orchestrator | 2026-01-10 14:56:48 | INFO  | Task 27a2dbc7-3cd7-4e7d-a96b-79d180378f23 is in state SUCCESS 2026-01-10 14:56:48.285905 | orchestrator | 2026-01-10 14:56:48.285964 | orchestrator | 
2026-01-10 14:56:48.285980 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:56:48.286730 | orchestrator | 2026-01-10 14:56:48.286742 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-01-10 14:56:48.286748 | orchestrator | Saturday 10 January 2026 14:46:12 +0000 (0:00:00.255) 0:00:00.255 ****** 2026-01-10 14:56:48.286755 | orchestrator | changed: [testbed-manager] 2026-01-10 14:56:48.286762 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:56:48.286768 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:56:48.286775 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:56:48.286781 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:56:48.286787 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:56:48.286793 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:56:48.286800 | orchestrator | 2026-01-10 14:56:48.286806 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:56:48.286829 | orchestrator | Saturday 10 January 2026 14:46:13 +0000 (0:00:01.230) 0:00:01.485 ****** 2026-01-10 14:56:48.286836 | orchestrator | changed: [testbed-manager] 2026-01-10 14:56:48.286842 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:56:48.286849 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:56:48.286859 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:56:48.286869 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:56:48.286875 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:56:48.286881 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:56:48.286887 | orchestrator | 2026-01-10 14:56:48.286893 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 14:56:48.286900 | orchestrator | Saturday 10 January 2026 14:46:14 +0000 (0:00:00.942) 0:00:02.428 ****** 2026-01-10 14:56:48.286907 | 
orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-01-10 14:56:48.286914 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-01-10 14:56:48.286920 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-01-10 14:56:48.286926 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-01-10 14:56:48.286932 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-01-10 14:56:48.286938 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-01-10 14:56:48.286944 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-01-10 14:56:48.286950 | orchestrator | 2026-01-10 14:56:48.286956 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-01-10 14:56:48.286962 | orchestrator | 2026-01-10 14:56:48.286968 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-01-10 14:56:48.286983 | orchestrator | Saturday 10 January 2026 14:46:16 +0000 (0:00:01.589) 0:00:04.017 ****** 2026-01-10 14:56:48.286990 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:56:48.286996 | orchestrator | 2026-01-10 14:56:48.287002 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-01-10 14:56:48.287008 | orchestrator | Saturday 10 January 2026 14:46:17 +0000 (0:00:01.435) 0:00:05.453 ****** 2026-01-10 14:56:48.287014 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-01-10 14:56:48.287020 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-01-10 14:56:48.287026 | orchestrator | 2026-01-10 14:56:48.287032 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-01-10 14:56:48.287083 | orchestrator | Saturday 10 January 2026 14:46:22 +0000 (0:00:04.816) 0:00:10.270 ****** 2026-01-10 14:56:48.287090 | 
orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-10 14:56:48.287097 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-10 14:56:48.287103 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:56:48.287110 | orchestrator |
2026-01-10 14:56:48.287120 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-01-10 14:56:48.287131 | orchestrator | Saturday 10 January 2026 14:46:27 +0000 (0:00:04.787) 0:00:15.057 ******
2026-01-10 14:56:48.287157 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:56:48.287170 | orchestrator |
2026-01-10 14:56:48.287180 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-01-10 14:56:48.287190 | orchestrator | Saturday 10 January 2026 14:46:28 +0000 (0:00:01.611) 0:00:16.669 ******
2026-01-10 14:56:48.287260 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:56:48.287271 | orchestrator |
2026-01-10 14:56:48.287281 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-01-10 14:56:48.287289 | orchestrator | Saturday 10 January 2026 14:46:30 +0000 (0:00:01.604) 0:00:18.273 ******
2026-01-10 14:56:48.287296 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:56:48.287302 | orchestrator |
2026-01-10 14:56:48.287325 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-10 14:56:48.287337 | orchestrator | Saturday 10 January 2026 14:46:33 +0000 (0:00:03.306) 0:00:21.580 ******
2026-01-10 14:56:48.287352 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:56:48.287359 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.287366 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:56:48.287373 | orchestrator |
2026-01-10 14:56:48.287379 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-10 14:56:48.287386 | orchestrator | Saturday 10 January 2026 14:46:34 +0000 (0:00:00.552) 0:00:22.133 ******
2026-01-10 14:56:48.287393 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:56:48.287400 | orchestrator |
2026-01-10 14:56:48.287407 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-01-10 14:56:48.287413 | orchestrator | Saturday 10 January 2026 14:47:05 +0000 (0:00:31.735) 0:00:53.869 ******
2026-01-10 14:56:48.287420 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:56:48.287427 | orchestrator |
2026-01-10 14:56:48.287434 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-10 14:56:48.287441 | orchestrator | Saturday 10 January 2026 14:47:22 +0000 (0:00:16.069) 0:01:09.938 ******
2026-01-10 14:56:48.287448 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:56:48.287455 | orchestrator |
2026-01-10 14:56:48.287463 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-10 14:56:48.287469 | orchestrator | Saturday 10 January 2026 14:47:35 +0000 (0:00:13.809) 0:01:23.748 ******
2026-01-10 14:56:48.287486 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:56:48.287493 | orchestrator |
2026-01-10 14:56:48.287499 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-01-10 14:56:48.287505 | orchestrator | Saturday 10 January 2026 14:47:37 +0000 (0:00:01.903) 0:01:25.651 ******
2026-01-10 14:56:48.287511 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:56:48.287517 | orchestrator |
2026-01-10 14:56:48.287524 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-10 14:56:48.287530 | orchestrator | Saturday 10 January 2026 14:47:38 +0000 (0:00:00.655) 0:01:26.306 ******
2026-01-10 14:56:48.287536 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:56:48.287542 | orchestrator |
2026-01-10 14:56:48.287548 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-10 14:56:48.287554 | orchestrator | Saturday 10 January 2026 14:47:39 +0000 (0:00:00.641) 0:01:26.948 ******
2026-01-10 14:56:48.287560 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:56:48.287566 | orchestrator |
2026-01-10 14:56:48.287572 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-01-10 14:56:48.287589 | orchestrator | Saturday 10 January 2026 14:47:57 +0000 (0:00:18.302) 0:01:45.250 ******
2026-01-10 14:56:48.287602 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:56:48.287608 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.287614 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:56:48.287620 | orchestrator |
2026-01-10 14:56:48.287626 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-01-10 14:56:48.287632 | orchestrator |
2026-01-10 14:56:48.287639 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-10 14:56:48.287645 | orchestrator | Saturday 10 January 2026 14:47:57 +0000 (0:00:00.359) 0:01:45.610 ******
2026-01-10 14:56:48.287651 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:56:48.287661 | orchestrator |
2026-01-10 14:56:48.287673 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-01-10 14:56:48.287697 | orchestrator | Saturday 10 January 2026 14:47:58 +0000 (0:00:00.899) 0:01:46.510 ******
2026-01-10 14:56:48.287728 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.287738 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:56:48.287748 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:56:48.287757 | orchestrator |
2026-01-10 14:56:48.287766 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-01-10 14:56:48.287776 | orchestrator | Saturday 10 January 2026 14:48:01 +0000 (0:00:02.475) 0:01:48.986 ******
2026-01-10 14:56:48.287792 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.287802 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:56:48.287811 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:56:48.287820 | orchestrator |
2026-01-10 14:56:48.287829 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-10 14:56:48.287839 | orchestrator | Saturday 10 January 2026 14:48:03 +0000 (0:00:02.126) 0:01:51.113 ******
2026-01-10 14:56:48.287963 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:56:48.287974 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.287980 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:56:48.287986 | orchestrator |
2026-01-10 14:56:48.287992 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-10 14:56:48.287998 | orchestrator | Saturday 10 January 2026 14:48:03 +0000 (0:00:00.582) 0:01:51.695 ******
2026-01-10 14:56:48.288004 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-10 14:56:48.288011 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.288017 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-10 14:56:48.288023 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:56:48.288029 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-01-10 14:56:48.288035 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-01-10 14:56:48.288041 | orchestrator |
2026-01-10 14:56:48.288053 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-10 14:56:48.288059 | orchestrator | Saturday 10 January 2026 14:48:16 +0000 (0:00:12.539) 0:02:04.234 ******
2026-01-10 14:56:48.288065 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:56:48.288071 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.288077 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:56:48.288083 | orchestrator |
2026-01-10 14:56:48.288089 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-10 14:56:48.288095 | orchestrator | Saturday 10 January 2026 14:48:17 +0000 (0:00:00.818) 0:02:05.053 ******
2026-01-10 14:56:48.288101 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-10 14:56:48.288107 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-10 14:56:48.288114 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:56:48.288120 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.288126 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-10 14:56:48.288132 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:56:48.288138 | orchestrator |
2026-01-10 14:56:48.288144 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-01-10 14:56:48.288150 | orchestrator | Saturday 10 January 2026 14:48:18 +0000 (0:00:01.572) 0:02:06.627 ******
2026-01-10 14:56:48.288156 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.288162 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:56:48.288168 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:56:48.288175 | orchestrator |
2026-01-10 14:56:48.288181 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-01-10 14:56:48.288187 | orchestrator | Saturday 10 January 2026 14:48:20 +0000 (0:00:01.296) 0:02:07.924 ******
2026-01-10 14:56:48.288193 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.288199 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:56:48.288205 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:56:48.288211 | orchestrator |
2026-01-10 14:56:48.288218 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-01-10 14:56:48.288224 | orchestrator | Saturday 10 January 2026 14:48:21 +0000 (0:00:01.325) 0:02:09.249 ******
2026-01-10 14:56:48.288230 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.288236 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:56:48.288252 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:56:48.288258 | orchestrator |
2026-01-10 14:56:48.288264 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-01-10 14:56:48.288277 | orchestrator | Saturday 10 January 2026 14:48:23 +0000 (0:00:02.308) 0:02:11.557 ******
2026-01-10 14:56:48.288283 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.288289 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:56:48.288295 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:56:48.288301 | orchestrator |
2026-01-10 14:56:48.288307 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-10 14:56:48.288332 | orchestrator | Saturday 10 January 2026 14:48:45 +0000 (0:00:21.493) 0:02:33.051 ******
2026-01-10 14:56:48.288339 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.288345 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:56:48.288351 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:56:48.288357 | orchestrator |
2026-01-10 14:56:48.288363 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-10 14:56:48.288369 | orchestrator | Saturday 10 January 2026 14:48:58 +0000 (0:00:12.898) 0:02:45.950 ******
2026-01-10 14:56:48.288375 | orchestrator | ok: [testbed-node-0]
2026-01-10 14:56:48.288381 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.288388 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:56:48.288394 | orchestrator |
2026-01-10 14:56:48.288400 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-01-10 14:56:48.288406 | orchestrator | Saturday 10 January 2026 14:48:58 +0000 (0:00:00.777) 0:02:46.728 ******
2026-01-10 14:56:48.288412 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.288419 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:56:48.288443 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:56:48.288450 | orchestrator |
2026-01-10 14:56:48.288457 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-01-10 14:56:48.288465 | orchestrator | Saturday 10 January 2026 14:49:11 +0000 (0:00:12.949) 0:02:59.677 ******
2026-01-10 14:56:48.288472 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:56:48.288479 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.288486 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:56:48.288493 | orchestrator |
2026-01-10 14:56:48.288500 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-01-10 14:56:48.288506 | orchestrator | Saturday 10 January 2026 14:49:12 +0000 (0:00:01.169) 0:03:00.846 ******
2026-01-10 14:56:48.288513 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:56:48.288520 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.288527 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:56:48.288534 | orchestrator |
2026-01-10 14:56:48.288541 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-01-10 14:56:48.288547 | orchestrator |
2026-01-10 14:56:48.288554 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-10 14:56:48.288561 | orchestrator | Saturday 10 January 2026 14:49:13 +0000 (0:00:00.568) 0:03:01.415 ******
2026-01-10 14:56:48.288568 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:56:48.288576 | orchestrator |
2026-01-10 14:56:48.288583 | orchestrator | TASK [service-ks-register : nova | Creating/deleting services] *****************
2026-01-10 14:56:48.288590 | orchestrator | Saturday 10 January 2026 14:49:14 +0000 (0:00:00.913) 0:03:02.328 ******
2026-01-10 14:56:48.288597 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-01-10 14:56:48.288604 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-01-10 14:56:48.288611 | orchestrator |
2026-01-10 14:56:48.288618 | orchestrator | TASK [service-ks-register : nova | Creating/deleting endpoints] ****************
2026-01-10 14:56:48.288624 | orchestrator | Saturday 10 January 2026 14:49:17 +0000 (0:00:03.435) 0:03:05.764 ******
2026-01-10 14:56:48.288635 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-01-10 14:56:48.288643 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-01-10 14:56:48.288651 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-01-10 14:56:48.288662 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-01-10 14:56:48.288669 | orchestrator |
2026-01-10 14:56:48.288677 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-01-10 14:56:48.288684 | orchestrator | Saturday 10 January 2026 14:49:25 +0000 (0:00:07.546) 0:03:13.310 ******
2026-01-10 14:56:48.288690 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-10 14:56:48.288699 | orchestrator |
2026-01-10 14:56:48.288709 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-01-10 14:56:48.288720 | orchestrator | Saturday 10 January 2026 14:49:28 +0000 (0:00:03.358) 0:03:16.669 ******
2026-01-10 14:56:48.288730 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-10 14:56:48.288740 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-01-10 14:56:48.288751 | orchestrator |
2026-01-10 14:56:48.288760 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-01-10 14:56:48.288769 | orchestrator | Saturday 10 January 2026 14:49:32 +0000 (0:00:04.190) 0:03:20.860 ******
2026-01-10 14:56:48.288779 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-10 14:56:48.288789 | orchestrator |
2026-01-10 14:56:48.288799 | orchestrator | TASK [service-ks-register : nova | Granting/revoking user roles] ***************
2026-01-10 14:56:48.288809 | orchestrator | Saturday 10 January 2026 14:49:36 +0000 (0:00:03.402) 0:03:24.262 ******
2026-01-10 14:56:48.288818 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-01-10 14:56:48.288829 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-01-10 14:56:48.288839 | orchestrator |
2026-01-10 14:56:48.288849 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-01-10 14:56:48.288868 | orchestrator | Saturday 10 January 2026 14:49:44 +0000 (0:00:08.006) 0:03:32.268 ******
2026-01-10 14:56:48.288884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:56:48.288899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:56:48.288924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:56:48.288945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:56:48.288958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:56:48.288970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:56:48.288990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-10 14:56:48.288998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-10 14:56:48.289004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-10 14:56:48.289011 | orchestrator |
2026-01-10 14:56:48.289021 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-01-10 14:56:48.289028 | orchestrator | Saturday 10 January 2026 14:49:48 +0000 (0:00:03.679) 0:03:35.947 ******
2026-01-10 14:56:48.289034 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:56:48.289040 | orchestrator |
2026-01-10 14:56:48.289046 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-01-10 14:56:48.289052 | orchestrator | Saturday 10 January 2026 14:49:48 +0000 (0:00:00.311) 0:03:36.262 ******
2026-01-10 14:56:48.289058 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:56:48.289065 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.289070 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:56:48.289076 | orchestrator |
2026-01-10 14:56:48.289083 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-01-10 14:56:48.289089 | orchestrator | Saturday 10 January 2026 14:49:49 +0000 (0:00:01.070) 0:03:37.332 ******
2026-01-10 14:56:48.289095 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-10 14:56:48.289101 | orchestrator |
2026-01-10 14:56:48.289107 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-01-10 14:56:48.289113 | orchestrator | Saturday 10 January 2026 14:49:50 +0000 (0:00:01.041) 0:03:38.374 ******
2026-01-10 14:56:48.289119 | orchestrator | skipping: [testbed-node-0]
2026-01-10 14:56:48.289124 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.289130 | orchestrator | skipping: [testbed-node-2]
2026-01-10 14:56:48.289136 | orchestrator |
2026-01-10 14:56:48.289142 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-10 14:56:48.289148 | orchestrator | Saturday 10 January 2026 14:49:50 +0000 (0:00:00.364) 0:03:38.739 ******
2026-01-10 14:56:48.289154 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-10 14:56:48.289165 | orchestrator |
2026-01-10 14:56:48.289171 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-01-10 14:56:48.289177 | orchestrator | Saturday 10 January 2026 14:49:52 +0000 (0:00:01.263) 0:03:40.003 ******
2026-01-10 14:56:48.289184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:56:48.289194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:56:48.289206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:56:48.289213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:56:48.289228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:56:48.289238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:56:48.289249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-10 14:56:48.289256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-10 14:56:48.289262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-10 14:56:48.289272 | orchestrator |
2026-01-10 14:56:48.289279 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-01-10 14:56:48.289285 | orchestrator | Saturday 10 January 2026 14:49:57 +0000 (0:00:05.272) 0:03:45.275 ******
2026-01-10 14:56:48.289292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:56:48.289301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:56:48.289376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-10 14:56:48.289385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-10 14:56:48.289396 | orchestrator | skipping: [testbed-node-1]
2026-01-10 14:56:48.289403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra':
['option httpchk']}}}})  2026-01-10 14:56:48.289413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.289419 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.289426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.289438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 
'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.289449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.289456 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.289462 | orchestrator | 2026-01-10 14:56:48.289468 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-10 14:56:48.289475 | orchestrator | Saturday 10 January 2026 14:49:58 +0000 (0:00:01.434) 0:03:46.709 ****** 2026-01-10 14:56:48.289481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.289491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.289502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.289509 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.289515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.289526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 
'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.289536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.289543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.289550 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.289561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.289572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.289579 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.289585 | orchestrator | 2026-01-10 14:56:48.289592 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-01-10 14:56:48.289598 | orchestrator | Saturday 10 January 2026 14:50:00 +0000 (0:00:01.887) 0:03:48.596 ****** 2026-01-10 14:56:48.289607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:56:48.289614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:56:48.289626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:56:48.289637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:56:48.289647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:56:48.289654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:56:48.289671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.289678 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.289684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.289691 | orchestrator | 2026-01-10 14:56:48.289697 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-01-10 14:56:48.289703 | orchestrator | Saturday 10 January 2026 14:50:05 +0000 (0:00:04.341) 0:03:52.938 ****** 2026-01-10 14:56:48.289715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:56:48.289722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:56:48.289737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:56:48.289744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:56:48.289754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 
'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:56:48.289761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:56:48.289776 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.289782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.289789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.289795 | orchestrator | 2026-01-10 14:56:48.289802 | orchestrator | TASK [nova : Copying over existing policy file] 
******************************** 2026-01-10 14:56:48.289808 | orchestrator | Saturday 10 January 2026 14:50:19 +0000 (0:00:14.776) 0:04:07.715 ****** 2026-01-10 14:56:48.289817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.289824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.289838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.289845 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.289852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.289858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.289868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.289878 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.290422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.290442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.290450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.290456 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.290463 | orchestrator | 2026-01-10 14:56:48.290469 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-01-10 14:56:48.290475 | orchestrator | Saturday 10 January 2026 14:50:21 +0000 (0:00:01.675) 0:04:09.390 ****** 2026-01-10 14:56:48.290481 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.290488 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.290494 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.290500 | orchestrator | 2026-01-10 14:56:48.290506 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] ***************************** 2026-01-10 14:56:48.290512 | orchestrator | Saturday 10 January 2026 14:50:22 +0000 (0:00:01.268) 0:04:10.660 ****** 2026-01-10 14:56:48.290518 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.290524 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.290530 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.290536 | orchestrator | 2026-01-10 14:56:48.290542 | orchestrator | TASK [nova : Copying over vendordata file for nova services] ******************* 2026-01-10 14:56:48.290548 | orchestrator | Saturday 10 January 2026 14:50:24 
+0000 (0:00:01.460) 0:04:12.120 ****** 2026-01-10 14:56:48.290554 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)  2026-01-10 14:56:48.290567 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-01-10 14:56:48.290573 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.290584 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)  2026-01-10 14:56:48.290590 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-01-10 14:56:48.290596 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.290602 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)  2026-01-10 14:56:48.290608 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-01-10 14:56:48.290614 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.290620 | orchestrator | 2026-01-10 14:56:48.290626 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************ 2026-01-10 14:56:48.290632 | orchestrator | Saturday 10 January 2026 14:50:24 +0000 (0:00:00.551) 0:04:12.672 ****** 2026-01-10 14:56:48.290638 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774'}) 2026-01-10 14:56:48.290645 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775'}) 2026-01-10 14:56:48.290651 | orchestrator | 2026-01-10 14:56:48.290657 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] *************** 2026-01-10 14:56:48.290663 | orchestrator | Saturday 10 January 2026 14:50:26 +0000 (0:00:02.098) 0:04:14.770 ****** 2026-01-10 14:56:48.290669 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:56:48.290675 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:56:48.290681 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:56:48.290687 | orchestrator | 
2026-01-10 14:56:48.290693 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] ********** 2026-01-10 14:56:48.290700 | orchestrator | Saturday 10 January 2026 14:50:31 +0000 (0:00:04.274) 0:04:19.045 ****** 2026-01-10 14:56:48.290706 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:56:48.290712 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:56:48.290718 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:56:48.290724 | orchestrator | 2026-01-10 14:56:48.290730 | orchestrator | TASK [service-check-containers : nova | Check containers] ********************** 2026-01-10 14:56:48.290736 | orchestrator | Saturday 10 January 2026 14:50:35 +0000 (0:00:04.106) 0:04:23.152 ****** 2026-01-10 14:56:48.290748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:56:48.290756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 
'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:56:48.290770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:56:48.290782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:56:48.290789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:56:48.290796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-10 14:56:48.290814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.290825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.290836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.290846 | orchestrator | 2026-01-10 14:56:48.290856 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] *** 2026-01-10 14:56:48.290868 | orchestrator | Saturday 10 January 2026 14:50:38 +0000 (0:00:03.458) 0:04:26.610 ****** 2026-01-10 14:56:48.290884 | orchestrator | changed: [testbed-node-0] => { 2026-01-10 14:56:48.290895 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:56:48.290905 | orchestrator | } 2026-01-10 14:56:48.290916 | orchestrator | changed: [testbed-node-1] => { 2026-01-10 14:56:48.290926 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:56:48.290935 | orchestrator | } 2026-01-10 14:56:48.290942 | orchestrator | changed: [testbed-node-2] => { 2026-01-10 14:56:48.290948 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:56:48.290954 | orchestrator | } 2026-01-10 14:56:48.290959 | 
orchestrator | 2026-01-10 14:56:48.290966 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-10 14:56:48.290972 | orchestrator | Saturday 10 January 2026 14:50:39 +0000 (0:00:01.223) 0:04:27.833 ****** 2026-01-10 14:56:48.290978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.290990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.291003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.291009 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.291020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.291027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.291038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.291046 | orchestrator | 
skipping: [testbed-node-0] 2026-01-10 14:56:48.291056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.291064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-10 14:56:48.291077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.291084 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.291091 | orchestrator | 2026-01-10 14:56:48.291098 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-10 14:56:48.291109 | orchestrator | Saturday 10 January 2026 14:50:41 +0000 (0:00:02.044) 0:04:29.878 ****** 2026-01-10 14:56:48.291116 | orchestrator | 2026-01-10 14:56:48.291123 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-10 14:56:48.291130 | orchestrator | Saturday 10 January 2026 14:50:42 +0000 (0:00:00.258) 0:04:30.137 ****** 2026-01-10 14:56:48.291137 | orchestrator | 2026-01-10 14:56:48.291143 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-10 14:56:48.291150 | orchestrator | Saturday 10 January 2026 14:50:42 +0000 (0:00:00.226) 0:04:30.363 ****** 2026-01-10 14:56:48.291157 | orchestrator | 2026-01-10 14:56:48.291164 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-01-10 14:56:48.291172 | orchestrator | Saturday 10 January 2026 14:50:43 +0000 
(0:00:00.654) 0:04:31.019 ****** 2026-01-10 14:56:48.291179 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:56:48.291186 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:56:48.291193 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:56:48.291199 | orchestrator | 2026-01-10 14:56:48.291206 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-01-10 14:56:48.291213 | orchestrator | Saturday 10 January 2026 14:50:59 +0000 (0:00:16.788) 0:04:47.808 ****** 2026-01-10 14:56:48.291220 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:56:48.291227 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:56:48.291234 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:56:48.291241 | orchestrator | 2026-01-10 14:56:48.291248 | orchestrator | RUNNING HANDLER [nova : Restart nova-metadata container] *********************** 2026-01-10 14:56:48.291255 | orchestrator | Saturday 10 January 2026 14:51:06 +0000 (0:00:06.367) 0:04:54.176 ****** 2026-01-10 14:56:48.291262 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:56:48.291269 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:56:48.291275 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:56:48.291283 | orchestrator | 2026-01-10 14:56:48.291289 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-01-10 14:56:48.291296 | orchestrator | 2026-01-10 14:56:48.291303 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-10 14:56:48.291329 | orchestrator | Saturday 10 January 2026 14:51:16 +0000 (0:00:10.571) 0:05:04.747 ****** 2026-01-10 14:56:48.291340 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:56:48.291348 | orchestrator | 2026-01-10 14:56:48.291355 | orchestrator | TASK [nova-cell : 
include_tasks] *********************************************** 2026-01-10 14:56:48.291362 | orchestrator | Saturday 10 January 2026 14:51:17 +0000 (0:00:01.067) 0:05:05.815 ****** 2026-01-10 14:56:48.291369 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.291376 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:56:48.291383 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:56:48.291390 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.291397 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.291404 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.291411 | orchestrator | 2026-01-10 14:56:48.291422 | orchestrator | TASK [nova-cell : Get new Libvirt version] ************************************* 2026-01-10 14:56:48.291429 | orchestrator | Saturday 10 January 2026 14:51:18 +0000 (0:00:00.705) 0:05:06.520 ****** 2026-01-10 14:56:48.291436 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:56:48.291443 | orchestrator | 2026-01-10 14:56:48.291450 | orchestrator | TASK [nova-cell : Cache new Libvirt version] *********************************** 2026-01-10 14:56:48.291457 | orchestrator | Saturday 10 January 2026 14:51:40 +0000 (0:00:22.356) 0:05:28.876 ****** 2026-01-10 14:56:48.291464 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:56:48.291471 | orchestrator | 2026-01-10 14:56:48.291478 | orchestrator | TASK [Get nova_libvirt image info] ********************************************* 2026-01-10 14:56:48.291485 | orchestrator | Saturday 10 January 2026 14:51:42 +0000 (0:00:01.234) 0:05:30.111 ****** 2026-01-10 14:56:48.291490 | orchestrator | included: service-image-info for testbed-node-3 2026-01-10 14:56:48.291501 | orchestrator | 2026-01-10 14:56:48.291507 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] ***************** 2026-01-10 14:56:48.291513 | orchestrator | Saturday 10 January 2026 14:51:42 +0000 (0:00:00.773) 0:05:30.884 ****** 2026-01-10 
14:56:48.291519 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:56:48.291525 | orchestrator | 2026-01-10 14:56:48.291534 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-01-10 14:56:48.291540 | orchestrator | Saturday 10 January 2026 14:51:47 +0000 (0:00:04.230) 0:05:35.114 ****** 2026-01-10 14:56:48.291547 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:56:48.291552 | orchestrator | 2026-01-10 14:56:48.291559 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] **************** 2026-01-10 14:56:48.291565 | orchestrator | Saturday 10 January 2026 14:51:48 +0000 (0:00:01.800) 0:05:36.914 ****** 2026-01-10 14:56:48.291571 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.291577 | orchestrator | 2026-01-10 14:56:48.291583 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-01-10 14:56:48.291589 | orchestrator | Saturday 10 January 2026 14:51:50 +0000 (0:00:01.959) 0:05:38.873 ****** 2026-01-10 14:56:48.291595 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.291601 | orchestrator | 2026-01-10 14:56:48.291608 | orchestrator | TASK [nova-cell : Get container facts] ***************************************** 2026-01-10 14:56:48.291620 | orchestrator | Saturday 10 January 2026 14:51:53 +0000 (0:00:02.056) 0:05:40.930 ****** 2026-01-10 14:56:48.291630 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-10 14:56:48.291640 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-10 14:56:48.291650 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-10 14:56:48.291661 | orchestrator | 2026-01-10 14:56:48.291671 | orchestrator | TASK [nova-cell : Get current Libvirt version] ********************************* 2026-01-10 14:56:48.291681 | orchestrator | Saturday 10 January 2026 14:52:04 +0000 
(0:00:11.926) 0:05:52.857 ****** 2026-01-10 14:56:48.291690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-10 14:56:48.291701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-10 14:56:48.291711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-10 14:56:48.291722 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.291733 | orchestrator | 2026-01-10 14:56:48.291744 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************ 2026-01-10 14:56:48.291754 | orchestrator | Saturday 10 January 2026 14:52:10 +0000 (0:00:05.575) 0:05:58.433 ****** 2026-01-10 14:56:48.291765 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-3', 'ansible_loop_var': 'item'})  2026-01-10 14:56:48.291772 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-4', 'ansible_loop_var': 'item'})  2026-01-10 14:56:48.291778 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-5', 'ansible_loop_var': 'item'})  2026-01-10 14:56:48.291784 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.291791 | orchestrator | 2026-01-10 14:56:48.291797 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-01-10 14:56:48.291803 | 
orchestrator | Saturday 10 January 2026 14:52:14 +0000 (0:00:03.620) 0:06:02.054 ****** 2026-01-10 14:56:48.291809 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.291820 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.291826 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.291832 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:56:48.291838 | orchestrator | 2026-01-10 14:56:48.291845 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-10 14:56:48.291856 | orchestrator | Saturday 10 January 2026 14:52:15 +0000 (0:00:01.019) 0:06:03.073 ****** 2026-01-10 14:56:48.291866 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-01-10 14:56:48.291877 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-01-10 14:56:48.291888 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-01-10 14:56:48.291899 | orchestrator | 2026-01-10 14:56:48.291909 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-10 14:56:48.291923 | orchestrator | Saturday 10 January 2026 14:52:15 +0000 (0:00:00.707) 0:06:03.781 ****** 2026-01-10 14:56:48.291929 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-01-10 14:56:48.291935 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-01-10 14:56:48.291941 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-01-10 14:56:48.291947 | orchestrator | 2026-01-10 14:56:48.291953 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-10 14:56:48.291959 | orchestrator | Saturday 10 January 2026 14:52:16 +0000 (0:00:01.107) 0:06:04.889 ****** 2026-01-10 14:56:48.291965 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-01-10 14:56:48.291971 | orchestrator | skipping: [testbed-node-3] 2026-01-10 
14:56:48.291977 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-01-10 14:56:48.291983 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:56:48.291989 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-01-10 14:56:48.291995 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:56:48.292001 | orchestrator | 2026-01-10 14:56:48.292007 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-01-10 14:56:48.292013 | orchestrator | Saturday 10 January 2026 14:52:17 +0000 (0:00:00.721) 0:06:05.611 ****** 2026-01-10 14:56:48.292019 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:56:48.292025 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:56:48.292031 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.292037 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-10 14:56:48.292043 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:56:48.292049 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-10 14:56:48.292055 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:56:48.292061 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.292072 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-10 14:56:48.292079 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-10 14:56:48.292085 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.292091 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-10 14:56:48.292097 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-01-10 
14:56:48.292103 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-10 14:56:48.292109 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-01-10 14:56:48.292115 | orchestrator | 2026-01-10 14:56:48.292121 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-01-10 14:56:48.292127 | orchestrator | Saturday 10 January 2026 14:52:19 +0000 (0:00:02.006) 0:06:07.617 ****** 2026-01-10 14:56:48.292138 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.292144 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.292150 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:56:48.292156 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.292162 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:56:48.292168 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:56:48.292174 | orchestrator | 2026-01-10 14:56:48.292181 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-01-10 14:56:48.292187 | orchestrator | Saturday 10 January 2026 14:52:20 +0000 (0:00:01.107) 0:06:08.725 ****** 2026-01-10 14:56:48.292193 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.292199 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.292205 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.292211 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:56:48.292217 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:56:48.292223 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:56:48.292229 | orchestrator | 2026-01-10 14:56:48.292235 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-01-10 14:56:48.292241 | orchestrator | Saturday 10 January 2026 14:52:22 +0000 (0:00:01.716) 0:06:10.441 ****** 2026-01-10 14:56:48.292248 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292259 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292266 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292278 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292290 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292296 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292346 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292387 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292416 | orchestrator | 2026-01-10 14:56:48.292422 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-10 14:56:48.292428 | orchestrator | Saturday 10 January 2026 14:52:26 +0000 (0:00:03.840) 0:06:14.281 ****** 2026-01-10 14:56:48.292434 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:56:48.292445 | orchestrator | 2026-01-10 14:56:48.292451 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-10 14:56:48.292457 | orchestrator | Saturday 10 January 2026 14:52:28 +0000 (0:00:01.784) 0:06:16.066 ****** 2026-01-10 14:56:48.292467 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292474 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292480 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292492 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292513 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292533 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292562 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292582 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.292588 | orchestrator | 2026-01-10 14:56:48.292594 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-10 14:56:48.292600 | orchestrator | Saturday 10 January 2026 14:52:32 +0000 (0:00:04.178) 0:06:20.244 ****** 2026-01-10 14:56:48.292610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:56:48.292621 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:56:48.292630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.292637 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.292643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}})  2026-01-10 14:56:48.292650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:56:48.292656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:56:48.292665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.292675 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:56:48.292685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:56:48.292692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.292698 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.292704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:56:48.292711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:56:48.292717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.292723 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.292733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:56:48.292743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.292749 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:56:48.292760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.292766 | orchestrator | skipping: [testbed-node-1] 2026-01-10 
14:56:48.292772 | orchestrator | 2026-01-10 14:56:48.292778 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-10 14:56:48.292784 | orchestrator | Saturday 10 January 2026 14:52:35 +0000 (0:00:03.413) 0:06:23.657 ****** 2026-01-10 14:56:48.292791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:56:48.292798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:56:48.292807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.292817 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.292824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:56:48.292908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:56:48.292924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.292935 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:56:48.292946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:56:48.292958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.292976 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.292986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:56:48.292993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:56:48.293019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.293027 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.293033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.293039 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.293045 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:56:48.293052 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:56:48.293068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.293075 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:56:48.293081 | orchestrator | 2026-01-10 14:56:48.293087 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-10 14:56:48.293093 | orchestrator | Saturday 10 January 2026 14:52:38 +0000 (0:00:02.871) 0:06:26.529 ****** 2026-01-10 14:56:48.293100 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.293106 | orchestrator | 
skipping: [testbed-node-1] 2026-01-10 14:56:48.293111 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.293117 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-10 14:56:48.293123 | orchestrator | 2026-01-10 14:56:48.293129 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-01-10 14:56:48.293135 | orchestrator | Saturday 10 January 2026 14:52:39 +0000 (0:00:01.036) 0:06:27.565 ****** 2026-01-10 14:56:48.293142 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-10 14:56:48.293148 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-10 14:56:48.293154 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-10 14:56:48.293160 | orchestrator | 2026-01-10 14:56:48.293166 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-01-10 14:56:48.293172 | orchestrator | Saturday 10 January 2026 14:52:40 +0000 (0:00:01.229) 0:06:28.795 ****** 2026-01-10 14:56:48.293178 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-10 14:56:48.293184 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-10 14:56:48.293190 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-10 14:56:48.293196 | orchestrator | 2026-01-10 14:56:48.293202 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-01-10 14:56:48.293224 | orchestrator | Saturday 10 January 2026 14:52:41 +0000 (0:00:01.067) 0:06:29.862 ****** 2026-01-10 14:56:48.293231 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:56:48.293237 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:56:48.293243 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:56:48.293249 | orchestrator | 2026-01-10 14:56:48.293256 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-01-10 14:56:48.293262 | orchestrator | Saturday 10 
January 2026 14:52:42 +0000 (0:00:00.511) 0:06:30.374 ****** 2026-01-10 14:56:48.293268 | orchestrator | ok: [testbed-node-3] 2026-01-10 14:56:48.293274 | orchestrator | ok: [testbed-node-4] 2026-01-10 14:56:48.293280 | orchestrator | ok: [testbed-node-5] 2026-01-10 14:56:48.293286 | orchestrator | 2026-01-10 14:56:48.293292 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-01-10 14:56:48.293298 | orchestrator | Saturday 10 January 2026 14:52:42 +0000 (0:00:00.451) 0:06:30.826 ****** 2026-01-10 14:56:48.293304 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-10 14:56:48.293351 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-10 14:56:48.293360 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-10 14:56:48.293371 | orchestrator | 2026-01-10 14:56:48.293378 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-01-10 14:56:48.293384 | orchestrator | Saturday 10 January 2026 14:52:44 +0000 (0:00:01.164) 0:06:31.990 ****** 2026-01-10 14:56:48.293390 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-10 14:56:48.293396 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-10 14:56:48.293402 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-10 14:56:48.293408 | orchestrator | 2026-01-10 14:56:48.293414 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-01-10 14:56:48.293421 | orchestrator | Saturday 10 January 2026 14:52:45 +0000 (0:00:01.062) 0:06:33.052 ****** 2026-01-10 14:56:48.293427 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-10 14:56:48.293432 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-10 14:56:48.293438 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-10 14:56:48.293444 | orchestrator | 
changed: [testbed-node-3] => (item=nova-libvirt) 2026-01-10 14:56:48.293451 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-01-10 14:56:48.293461 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-01-10 14:56:48.293473 | orchestrator | 2026-01-10 14:56:48.293485 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-01-10 14:56:48.293498 | orchestrator | Saturday 10 January 2026 14:52:48 +0000 (0:00:03.526) 0:06:36.578 ****** 2026-01-10 14:56:48.293511 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.293519 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:56:48.293527 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:56:48.293534 | orchestrator | 2026-01-10 14:56:48.293541 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-01-10 14:56:48.293548 | orchestrator | Saturday 10 January 2026 14:52:48 +0000 (0:00:00.285) 0:06:36.863 ****** 2026-01-10 14:56:48.293555 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.293562 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:56:48.293569 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:56:48.293576 | orchestrator | 2026-01-10 14:56:48.293582 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-01-10 14:56:48.293590 | orchestrator | Saturday 10 January 2026 14:52:49 +0000 (0:00:00.409) 0:06:37.273 ****** 2026-01-10 14:56:48.293597 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:56:48.293604 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:56:48.293611 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:56:48.293618 | orchestrator | 2026-01-10 14:56:48.293625 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-01-10 14:56:48.293632 | orchestrator | Saturday 10 January 2026 14:52:50 +0000 (0:00:01.193) 
0:06:38.466 ****** 2026-01-10 14:56:48.293645 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-01-10 14:56:48.293653 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-01-10 14:56:48.293661 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-01-10 14:56:48.293668 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-01-10 14:56:48.293676 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-01-10 14:56:48.293688 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-01-10 14:56:48.293695 | orchestrator | 2026-01-10 14:56:48.293703 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-01-10 14:56:48.293710 | orchestrator | Saturday 10 January 2026 14:52:53 +0000 (0:00:03.288) 0:06:41.755 ****** 2026-01-10 14:56:48.293717 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-10 14:56:48.293724 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-10 14:56:48.293760 | orchestrator | changed: [testbed-node-5] => (item=None) 
2026-01-10 14:56:48.293768 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-10 14:56:48.293776 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:56:48.293783 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-10 14:56:48.293790 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:56:48.293797 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-10 14:56:48.293805 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:56:48.293812 | orchestrator | 2026-01-10 14:56:48.293819 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-01-10 14:56:48.293826 | orchestrator | Saturday 10 January 2026 14:52:57 +0000 (0:00:03.224) 0:06:44.979 ****** 2026-01-10 14:56:48.293833 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.293840 | orchestrator | 2026-01-10 14:56:48.293847 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-01-10 14:56:48.293855 | orchestrator | Saturday 10 January 2026 14:52:57 +0000 (0:00:00.147) 0:06:45.126 ****** 2026-01-10 14:56:48.293862 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.293869 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:56:48.293876 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:56:48.293883 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.293890 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.293897 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.293904 | orchestrator | 2026-01-10 14:56:48.293917 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-01-10 14:56:48.293929 | orchestrator | Saturday 10 January 2026 14:52:58 +0000 (0:00:00.842) 0:06:45.969 ****** 2026-01-10 14:56:48.293941 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-10 14:56:48.293955 | orchestrator | 2026-01-10 14:56:48.293970 | orchestrator | TASK 
[nova-cell : Set vendordata file path] ************************************ 2026-01-10 14:56:48.293982 | orchestrator | Saturday 10 January 2026 14:52:58 +0000 (0:00:00.686) 0:06:46.656 ****** 2026-01-10 14:56:48.293996 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.294009 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:56:48.294042 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:56:48.294049 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.294056 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.294063 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.294070 | orchestrator | 2026-01-10 14:56:48.294078 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-01-10 14:56:48.294085 | orchestrator | Saturday 10 January 2026 14:52:59 +0000 (0:00:00.570) 0:06:47.227 ****** 2026-01-10 14:56:48.294093 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294113 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 
'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294163 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294212 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294225 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294232 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294286 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294302 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294332 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294345 | orchestrator | 2026-01-10 14:56:48.294355 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-01-10 14:56:48.294362 | orchestrator | Saturday 10 January 2026 14:53:03 +0000 
(0:00:03.925) 0:06:51.152 ****** 2026-01-10 14:56:48.294399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:56:48.294415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:56:48.294429 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:56:48.294447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:56:48.294455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:56:48.294466 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:56:48.294474 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294520 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294558 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.294595 | orchestrator | 2026-01-10 14:56:48.294611 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-01-10 14:56:48.294630 | orchestrator | Saturday 10 January 2026 14:53:11 +0000 (0:00:07.801) 0:06:58.954 ****** 2026-01-10 14:56:48.294641 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.294652 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.294664 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:56:48.294677 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:56:48.294690 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.294705 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.294719 | orchestrator | 2026-01-10 14:56:48.294732 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-01-10 14:56:48.294740 | orchestrator | Saturday 10 January 2026 14:53:13 +0000 (0:00:02.336) 0:07:01.291 ****** 2026-01-10 14:56:48.294747 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 
'qemu.conf'}) 2026-01-10 14:56:48.294762 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-10 14:56:48.294769 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-10 14:56:48.294777 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-10 14:56:48.294784 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-10 14:56:48.294791 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-10 14:56:48.294798 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-10 14:56:48.294806 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.294814 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-10 14:56:48.294821 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.294828 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-10 14:56:48.294835 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-10 14:56:48.294842 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.294849 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-10 14:56:48.294856 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-10 14:56:48.294863 | orchestrator | 2026-01-10 14:56:48.294870 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-01-10 14:56:48.294878 | orchestrator | Saturday 10 January 2026 14:53:18 +0000 (0:00:05.066) 0:07:06.357 ****** 2026-01-10 14:56:48.294885 | orchestrator | 
skipping: [testbed-node-3] 2026-01-10 14:56:48.294892 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:56:48.294899 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:56:48.294912 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.294920 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.294927 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.294936 | orchestrator | 2026-01-10 14:56:48.294948 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-01-10 14:56:48.294961 | orchestrator | Saturday 10 January 2026 14:53:19 +0000 (0:00:00.601) 0:07:06.958 ****** 2026-01-10 14:56:48.294972 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-10 14:56:48.294992 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-10 14:56:48.295005 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-10 14:56:48.295018 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-10 14:56:48.295031 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-10 14:56:48.295044 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-10 14:56:48.295056 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-10 14:56:48.295068 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-10 14:56:48.295080 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 
'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-10 14:56:48.295090 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-10 14:56:48.295101 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.295111 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-10 14:56:48.295122 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-10 14:56:48.295132 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.295142 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-10 14:56:48.295153 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.295164 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-10 14:56:48.295177 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-10 14:56:48.295190 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-10 14:56:48.295202 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-10 14:56:48.295214 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-10 14:56:48.295225 | orchestrator | 2026-01-10 14:56:48.295233 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-01-10 14:56:48.295245 | orchestrator | Saturday 10 January 2026 14:53:25 +0000 (0:00:06.828) 0:07:13.787 ****** 2026-01-10 14:56:48.295252 | orchestrator | skipping: [testbed-node-0] => 
(item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-10 14:56:48.295259 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-10 14:56:48.295266 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-10 14:56:48.295273 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-10 14:56:48.295280 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-10 14:56:48.295287 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-10 14:56:48.295294 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-10 14:56:48.295301 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-10 14:56:48.295308 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-10 14:56:48.295342 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-10 14:56:48.295349 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-10 14:56:48.295356 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-10 14:56:48.295363 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-10 14:56:48.295370 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.295377 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-10 14:56:48.295391 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-10 14:56:48.295399 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  
2026-01-10 14:56:48.295406 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.295413 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-10 14:56:48.295420 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-10 14:56:48.295427 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.295434 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-10 14:56:48.295441 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-10 14:56:48.295448 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-10 14:56:48.295455 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-10 14:56:48.295462 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-10 14:56:48.295469 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-10 14:56:48.295476 | orchestrator | 2026-01-10 14:56:48.295483 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-01-10 14:56:48.295491 | orchestrator | Saturday 10 January 2026 14:53:34 +0000 (0:00:08.601) 0:07:22.388 ****** 2026-01-10 14:56:48.295498 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.295505 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:56:48.295512 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:56:48.295519 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.295526 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.295533 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.295540 | orchestrator | 2026-01-10 14:56:48.295548 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] 
********************* 2026-01-10 14:56:48.295555 | orchestrator | Saturday 10 January 2026 14:53:35 +0000 (0:00:00.618) 0:07:23.006 ****** 2026-01-10 14:56:48.295562 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.295569 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:56:48.295576 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:56:48.295583 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.295590 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.295597 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.295603 | orchestrator | 2026-01-10 14:56:48.295611 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-01-10 14:56:48.295618 | orchestrator | Saturday 10 January 2026 14:53:35 +0000 (0:00:00.855) 0:07:23.862 ****** 2026-01-10 14:56:48.295625 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.295632 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.295639 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.295646 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:56:48.295653 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:56:48.295660 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:56:48.295667 | orchestrator | 2026-01-10 14:56:48.295679 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-01-10 14:56:48.295686 | orchestrator | Saturday 10 January 2026 14:53:38 +0000 (0:00:02.361) 0:07:26.224 ****** 2026-01-10 14:56:48.295699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:56:48.295707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:56:48.295721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.295730 | orchestrator | skipping: [testbed-node-3] 2026-01-10 
14:56:48.295737 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:56:48.295745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:56:48.295760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.295767 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:56:48.295775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:56:48.295786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:56:48.295794 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.295802 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:56:48.295809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:56:48.295817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.295828 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.295839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:56:48.295847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.295854 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.295861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:56:48.295874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.295882 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.295889 | orchestrator | 2026-01-10 14:56:48.295896 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-01-10 14:56:48.295903 | orchestrator | Saturday 10 January 2026 14:53:40 +0000 (0:00:02.042) 0:07:28.266 ****** 2026-01-10 14:56:48.295911 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-10 14:56:48.295918 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-10 14:56:48.295924 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.295931 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-01-10 14:56:48.295938 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-10 14:56:48.295946 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:56:48.295953 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-01-10 14:56:48.295960 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-10 14:56:48.295971 | orchestrator | skipping: [testbed-node-5] 2026-01-10 
14:56:48.295980 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-10 14:56:48.295992 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-10 14:56:48.296004 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.296016 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-01-10 14:56:48.296028 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-10 14:56:48.296040 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.296051 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-10 14:56:48.296064 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-10 14:56:48.296078 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.296090 | orchestrator | 2026-01-10 14:56:48.296103 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] ***************** 2026-01-10 14:56:48.296115 | orchestrator | Saturday 10 January 2026 14:53:41 +0000 (0:00:00.702) 0:07:28.969 ****** 2026-01-10 14:56:48.296132 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 
14:56:48.296141 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:56:48.296155 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-10 14:56:48.296163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:56:48.296176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:56:48.296183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-10 14:56:48.296194 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 
'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:56:48.296202 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:56:48.296209 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-10 14:56:48.296221 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.296233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.296241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.296248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.296261 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.296269 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-10 14:56:48.296277 | orchestrator | 2026-01-10 14:56:48.296284 | 
orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] *** 2026-01-10 14:56:48.296295 | orchestrator | Saturday 10 January 2026 14:53:44 +0000 (0:00:03.415) 0:07:32.385 ****** 2026-01-10 14:56:48.296302 | orchestrator | changed: [testbed-node-3] => { 2026-01-10 14:56:48.296354 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:56:48.296368 | orchestrator | } 2026-01-10 14:56:48.296384 | orchestrator | changed: [testbed-node-4] => { 2026-01-10 14:56:48.296395 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:56:48.296408 | orchestrator | } 2026-01-10 14:56:48.296421 | orchestrator | changed: [testbed-node-5] => { 2026-01-10 14:56:48.296433 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:56:48.296443 | orchestrator | } 2026-01-10 14:56:48.296450 | orchestrator | changed: [testbed-node-0] => { 2026-01-10 14:56:48.296458 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:56:48.296465 | orchestrator | } 2026-01-10 14:56:48.296472 | orchestrator | changed: [testbed-node-1] => { 2026-01-10 14:56:48.296479 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:56:48.296486 | orchestrator | } 2026-01-10 14:56:48.296493 | orchestrator | changed: [testbed-node-2] => { 2026-01-10 14:56:48.296500 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:56:48.296507 | orchestrator | } 2026-01-10 14:56:48.296516 | orchestrator | 2026-01-10 14:56:48.296529 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-10 14:56:48.296537 | orchestrator | Saturday 10 January 2026 14:53:45 +0000 (0:00:00.640) 0:07:33.026 ****** 2026-01-10 14:56:48.296545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:56:48.296553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:56:48.296564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:56:48.296572 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.296589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-10 14:56:48.296597 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:56:48.296603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.296610 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:56:48.296617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-10 14:56:48.296627 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  
2026-01-10 14:56:48.296634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.296645 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.296652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:56:48.296663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.296670 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.296677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:56:48.296684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.296691 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.296698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-10 14:56:48.296708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-10 14:56:48.296724 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.296731 | orchestrator | 2026-01-10 14:56:48.296737 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-10 14:56:48.296744 | orchestrator | Saturday 10 January 2026 14:53:47 +0000 (0:00:02.328) 0:07:35.354 ****** 2026-01-10 14:56:48.296751 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.296757 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:56:48.296763 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:56:48.296770 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.296776 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.296783 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.296789 | orchestrator | 2026-01-10 14:56:48.296796 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-10 14:56:48.296802 | orchestrator | Saturday 10 January 2026 14:53:48 +0000 (0:00:01.114) 0:07:36.469 ****** 2026-01-10 
14:56:48.296809 | orchestrator | 2026-01-10 14:56:48.296815 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-10 14:56:48.296822 | orchestrator | Saturday 10 January 2026 14:53:48 +0000 (0:00:00.133) 0:07:36.603 ****** 2026-01-10 14:56:48.296828 | orchestrator | 2026-01-10 14:56:48.296835 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-10 14:56:48.296841 | orchestrator | Saturday 10 January 2026 14:53:48 +0000 (0:00:00.135) 0:07:36.738 ****** 2026-01-10 14:56:48.296847 | orchestrator | 2026-01-10 14:56:48.296858 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-10 14:56:48.296869 | orchestrator | Saturday 10 January 2026 14:53:48 +0000 (0:00:00.135) 0:07:36.873 ****** 2026-01-10 14:56:48.296885 | orchestrator | 2026-01-10 14:56:48.296898 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-10 14:56:48.296909 | orchestrator | Saturday 10 January 2026 14:53:49 +0000 (0:00:00.141) 0:07:37.014 ****** 2026-01-10 14:56:48.296920 | orchestrator | 2026-01-10 14:56:48.296930 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-10 14:56:48.296941 | orchestrator | Saturday 10 January 2026 14:53:49 +0000 (0:00:00.328) 0:07:37.343 ****** 2026-01-10 14:56:48.296951 | orchestrator | 2026-01-10 14:56:48.296962 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-01-10 14:56:48.296973 | orchestrator | Saturday 10 January 2026 14:53:49 +0000 (0:00:00.129) 0:07:37.473 ****** 2026-01-10 14:56:48.296984 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:56:48.296996 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:56:48.297007 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:56:48.297019 | orchestrator | 2026-01-10 14:56:48.297029 | 
orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-01-10 14:56:48.297041 | orchestrator | Saturday 10 January 2026 14:53:57 +0000 (0:00:07.643) 0:07:45.116 ****** 2026-01-10 14:56:48.297052 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:56:48.297064 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:56:48.297074 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:56:48.297085 | orchestrator | 2026-01-10 14:56:48.297096 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-01-10 14:56:48.297106 | orchestrator | Saturday 10 January 2026 14:54:16 +0000 (0:00:19.793) 0:08:04.909 ****** 2026-01-10 14:56:48.297117 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:56:48.297128 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:56:48.297140 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:56:48.297151 | orchestrator | 2026-01-10 14:56:48.297162 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-01-10 14:56:48.297171 | orchestrator | Saturday 10 January 2026 14:54:37 +0000 (0:00:20.460) 0:08:25.370 ****** 2026-01-10 14:56:48.297178 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:56:48.297184 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:56:48.297191 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:56:48.297197 | orchestrator | 2026-01-10 14:56:48.297204 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-01-10 14:56:48.297220 | orchestrator | Saturday 10 January 2026 14:55:01 +0000 (0:00:24.470) 0:08:49.840 ****** 2026-01-10 14:56:48.297230 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:56:48.297237 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:56:48.297244 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
2026-01-10 14:56:48.297251 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:56:48.297257 | orchestrator | 2026-01-10 14:56:48.297264 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-01-10 14:56:48.297270 | orchestrator | Saturday 10 January 2026 14:55:08 +0000 (0:00:06.194) 0:08:56.036 ****** 2026-01-10 14:56:48.297277 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:56:48.297283 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:56:48.297290 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:56:48.297296 | orchestrator | 2026-01-10 14:56:48.297303 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-01-10 14:56:48.297324 | orchestrator | Saturday 10 January 2026 14:55:08 +0000 (0:00:00.833) 0:08:56.869 ****** 2026-01-10 14:56:48.297335 | orchestrator | changed: [testbed-node-4] 2026-01-10 14:56:48.297396 | orchestrator | changed: [testbed-node-3] 2026-01-10 14:56:48.297405 | orchestrator | changed: [testbed-node-5] 2026-01-10 14:56:48.297412 | orchestrator | 2026-01-10 14:56:48.297418 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-01-10 14:56:48.297425 | orchestrator | Saturday 10 January 2026 14:55:32 +0000 (0:00:23.990) 0:09:20.860 ****** 2026-01-10 14:56:48.297437 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.297444 | orchestrator | 2026-01-10 14:56:48.297450 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-01-10 14:56:48.297457 | orchestrator | Saturday 10 January 2026 14:55:33 +0000 (0:00:00.357) 0:09:21.218 ****** 2026-01-10 14:56:48.297463 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:56:48.297470 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.297476 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.297483 | orchestrator | skipping: [testbed-node-4] 
2026-01-10 14:56:48.297489 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.297496 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-01-10 14:56:48.297503 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:56:48.297509 | orchestrator | 2026-01-10 14:56:48.297516 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-01-10 14:56:48.297522 | orchestrator | Saturday 10 January 2026 14:55:54 +0000 (0:00:21.050) 0:09:42.268 ****** 2026-01-10 14:56:48.297531 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.297543 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:56:48.297550 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.297557 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.297563 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:56:48.297575 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.297582 | orchestrator | 2026-01-10 14:56:48.297594 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-01-10 14:56:48.297601 | orchestrator | Saturday 10 January 2026 14:56:03 +0000 (0:00:09.649) 0:09:51.918 ****** 2026-01-10 14:56:48.297607 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.297614 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:56:48.297620 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.297627 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:56:48.297633 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.297640 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-01-10 14:56:48.297646 | orchestrator | 2026-01-10 14:56:48.297660 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 
2026-01-10 14:56:48.297667 | orchestrator | Saturday 10 January 2026 14:56:08 +0000 (0:00:04.887) 0:09:56.805 ****** 2026-01-10 14:56:48.297678 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:56:48.297685 | orchestrator | 2026-01-10 14:56:48.297691 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-01-10 14:56:48.297698 | orchestrator | Saturday 10 January 2026 14:56:24 +0000 (0:00:15.488) 0:10:12.294 ****** 2026-01-10 14:56:48.297705 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:56:48.297711 | orchestrator | 2026-01-10 14:56:48.297718 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-01-10 14:56:48.297724 | orchestrator | Saturday 10 January 2026 14:56:25 +0000 (0:00:01.573) 0:10:13.868 ****** 2026-01-10 14:56:48.297731 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.297737 | orchestrator | 2026-01-10 14:56:48.297744 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-01-10 14:56:48.297750 | orchestrator | Saturday 10 January 2026 14:56:27 +0000 (0:00:01.612) 0:10:15.481 ****** 2026-01-10 14:56:48.297757 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-10 14:56:48.297763 | orchestrator | 2026-01-10 14:56:48.297770 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-01-10 14:56:48.297776 | orchestrator | 2026-01-10 14:56:48.297783 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-01-10 14:56:48.297789 | orchestrator | Saturday 10 January 2026 14:56:41 +0000 (0:00:13.480) 0:10:28.961 ****** 2026-01-10 14:56:48.297796 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:56:48.297802 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:56:48.297809 | orchestrator | changed: 
[testbed-node-2] 2026-01-10 14:56:48.297815 | orchestrator | 2026-01-10 14:56:48.297822 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-01-10 14:56:48.297828 | orchestrator | 2026-01-10 14:56:48.297835 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-01-10 14:56:48.297841 | orchestrator | Saturday 10 January 2026 14:56:41 +0000 (0:00:00.951) 0:10:29.913 ****** 2026-01-10 14:56:48.297848 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.297855 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.297861 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.297868 | orchestrator | 2026-01-10 14:56:48.297874 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-01-10 14:56:48.297881 | orchestrator | 2026-01-10 14:56:48.297888 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-01-10 14:56:48.297894 | orchestrator | Saturday 10 January 2026 14:56:42 +0000 (0:00:00.762) 0:10:30.675 ****** 2026-01-10 14:56:48.297901 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-01-10 14:56:48.297907 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-10 14:56:48.297914 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-10 14:56:48.297920 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-01-10 14:56:48.297927 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-01-10 14:56:48.297933 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-01-10 14:56:48.297940 | orchestrator | skipping: [testbed-node-3] 2026-01-10 14:56:48.297947 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-01-10 14:56:48.297953 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  
2026-01-10 14:56:48.297960 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-10 14:56:48.297966 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-01-10 14:56:48.297973 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-01-10 14:56:48.297983 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-01-10 14:56:48.297989 | orchestrator | skipping: [testbed-node-4] 2026-01-10 14:56:48.297996 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-01-10 14:56:48.298006 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-01-10 14:56:48.298037 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-10 14:56:48.298045 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-01-10 14:56:48.298051 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-01-10 14:56:48.298058 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-01-10 14:56:48.298064 | orchestrator | skipping: [testbed-node-5] 2026-01-10 14:56:48.298071 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-01-10 14:56:48.298077 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-10 14:56:48.298084 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-10 14:56:48.298090 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-01-10 14:56:48.298097 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-01-10 14:56:48.298103 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-01-10 14:56:48.298110 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.298122 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-01-10 14:56:48.298132 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  
2026-01-10 14:56:48.298143 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-10 14:56:48.298155 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-01-10 14:56:48.298168 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-01-10 14:56:48.298178 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-01-10 14:56:48.298190 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.298201 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-01-10 14:56:48.298210 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-10 14:56:48.298221 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-10 14:56:48.298228 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-01-10 14:56:48.298235 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-01-10 14:56:48.298241 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-01-10 14:56:48.298248 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.298254 | orchestrator | 2026-01-10 14:56:48.298261 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-01-10 14:56:48.298267 | orchestrator | 2026-01-10 14:56:48.298274 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-01-10 14:56:48.298280 | orchestrator | Saturday 10 January 2026 14:56:44 +0000 (0:00:01.358) 0:10:32.034 ****** 2026-01-10 14:56:48.298287 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-01-10 14:56:48.298293 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-01-10 14:56:48.298300 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.298306 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-01-10 14:56:48.298330 | orchestrator | 
skipping: [testbed-node-1] => (item=nova-api)  2026-01-10 14:56:48.298337 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.298343 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-01-10 14:56:48.298350 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-01-10 14:56:48.298356 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.298363 | orchestrator | 2026-01-10 14:56:48.298369 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-01-10 14:56:48.298376 | orchestrator | 2026-01-10 14:56:48.298382 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-01-10 14:56:48.298389 | orchestrator | Saturday 10 January 2026 14:56:44 +0000 (0:00:00.571) 0:10:32.605 ****** 2026-01-10 14:56:48.298395 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.298402 | orchestrator | 2026-01-10 14:56:48.298416 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-01-10 14:56:48.298423 | orchestrator | 2026-01-10 14:56:48.298429 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-01-10 14:56:48.298436 | orchestrator | Saturday 10 January 2026 14:56:46 +0000 (0:00:01.360) 0:10:33.965 ****** 2026-01-10 14:56:48.298442 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:56:48.298449 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:56:48.298455 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:56:48.298462 | orchestrator | 2026-01-10 14:56:48.298468 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 14:56:48.298475 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 14:56:48.298482 | orchestrator | testbed-node-0 : ok=59  changed=39  unreachable=0 failed=0 skipped=48  rescued=0 
ignored=0 2026-01-10 14:56:48.298489 | orchestrator | testbed-node-1 : ok=32  changed=23  unreachable=0 failed=0 skipped=55  rescued=0 ignored=0 2026-01-10 14:56:48.298495 | orchestrator | testbed-node-2 : ok=32  changed=23  unreachable=0 failed=0 skipped=55  rescued=0 ignored=0 2026-01-10 14:56:48.298502 | orchestrator | testbed-node-3 : ok=49  changed=29  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-10 14:56:48.298513 | orchestrator | testbed-node-4 : ok=37  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-10 14:56:48.298519 | orchestrator | testbed-node-5 : ok=37  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-10 14:56:48.298526 | orchestrator | 2026-01-10 14:56:48.298532 | orchestrator | 2026-01-10 14:56:48.298539 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 14:56:48.298546 | orchestrator | Saturday 10 January 2026 14:56:46 +0000 (0:00:00.472) 0:10:34.438 ****** 2026-01-10 14:56:48.298552 | orchestrator | =============================================================================== 2026-01-10 14:56:48.298559 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.74s 2026-01-10 14:56:48.298565 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 24.47s 2026-01-10 14:56:48.298571 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 23.99s 2026-01-10 14:56:48.298578 | orchestrator | nova-cell : Get new Libvirt version ------------------------------------ 22.36s 2026-01-10 14:56:48.298584 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.49s 2026-01-10 14:56:48.298591 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.05s 2026-01-10 14:56:48.298597 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 
20.46s 2026-01-10 14:56:48.298604 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.79s 2026-01-10 14:56:48.298611 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.30s 2026-01-10 14:56:48.298617 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 16.79s 2026-01-10 14:56:48.298623 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.07s 2026-01-10 14:56:48.298630 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.49s 2026-01-10 14:56:48.298640 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 14.78s 2026-01-10 14:56:48.298647 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.81s 2026-01-10 14:56:48.298653 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.48s 2026-01-10 14:56:48.298660 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.95s 2026-01-10 14:56:48.298673 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.90s 2026-01-10 14:56:48.298680 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 12.54s 2026-01-10 14:56:48.298686 | orchestrator | nova-cell : Get container facts ---------------------------------------- 11.93s 2026-01-10 14:56:48.298693 | orchestrator | nova : Restart nova-metadata container --------------------------------- 10.57s 2026-01-10 14:56:48.298699 | orchestrator | 2026-01-10 14:56:48 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:56:51.334746 | orchestrator | 2026-01-10 14:56:51 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state STARTED 2026-01-10 14:56:51.334806 | orchestrator | 2026-01-10 14:56:51 | INFO  | Wait 1 second(s) until the next check 2026-01-10 14:58:19.659662 | orchestrator | 2026-01-10 14:58:19 | INFO  | Task 50416474-4d7f-4703-a08e-c36f9b97b8f5 is in state SUCCESS 2026-01-10 14:58:19.660118 | orchestrator | 2026-01-10 14:58:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:58:19.661641 | orchestrator | 2026-01-10 14:58:19.661674 | orchestrator | 2026-01-10 14:58:19.661683 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 14:58:19.661691 | orchestrator | 2026-01-10 14:58:19.661698 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 14:58:19.661705 | orchestrator | Saturday 10 January 2026 14:53:32 +0000 (0:00:00.236) 0:00:00.236 ****** 2026-01-10 14:58:19.661712 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:58:19.661719 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:58:19.661726 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:58:19.661733 | orchestrator | 2026-01-10 14:58:19.661740 | orchestrator | TASK [Group hosts
based on enabled services] *********************************** 2026-01-10 14:58:19.661746 | orchestrator | Saturday 10 January 2026 14:53:33 +0000 (0:00:00.305) 0:00:00.542 ****** 2026-01-10 14:58:19.661753 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-01-10 14:58:19.661759 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-01-10 14:58:19.661765 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-01-10 14:58:19.661772 | orchestrator | 2026-01-10 14:58:19.661778 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-01-10 14:58:19.661784 | orchestrator | 2026-01-10 14:58:19.661790 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-10 14:58:19.661797 | orchestrator | Saturday 10 January 2026 14:53:33 +0000 (0:00:00.401) 0:00:00.943 ****** 2026-01-10 14:58:19.661804 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:58:19.661811 | orchestrator | 2026-01-10 14:58:19.661817 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting services] ************** 2026-01-10 14:58:19.661824 | orchestrator | Saturday 10 January 2026 14:53:34 +0000 (0:00:00.606) 0:00:01.550 ****** 2026-01-10 14:58:19.661830 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-01-10 14:58:19.661836 | orchestrator | 2026-01-10 14:58:19.661842 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting endpoints] ************* 2026-01-10 14:58:19.661848 | orchestrator | Saturday 10 January 2026 14:53:38 +0000 (0:00:03.934) 0:00:05.484 ****** 2026-01-10 14:58:19.661854 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-01-10 14:58:19.661860 | orchestrator | changed: [testbed-node-0] => (item=octavia -> 
https://api.testbed.osism.xyz:9876 -> public) 2026-01-10 14:58:19.661866 | orchestrator | 2026-01-10 14:58:19.661872 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-01-10 14:58:19.661878 | orchestrator | Saturday 10 January 2026 14:53:44 +0000 (0:00:06.801) 0:00:12.286 ****** 2026-01-10 14:58:19.661884 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-10 14:58:19.661890 | orchestrator | 2026-01-10 14:58:19.661895 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-01-10 14:58:19.661902 | orchestrator | Saturday 10 January 2026 14:53:48 +0000 (0:00:03.154) 0:00:15.441 ****** 2026-01-10 14:58:19.661907 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-10 14:58:19.661913 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-01-10 14:58:19.661920 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-01-10 14:58:19.661926 | orchestrator | 2026-01-10 14:58:19.661931 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-01-10 14:58:19.661952 | orchestrator | Saturday 10 January 2026 14:53:56 +0000 (0:00:08.713) 0:00:24.154 ****** 2026-01-10 14:58:19.661959 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-10 14:58:19.661965 | orchestrator | 2026-01-10 14:58:19.661971 | orchestrator | TASK [service-ks-register : octavia | Granting/revoking user roles] ************ 2026-01-10 14:58:19.661977 | orchestrator | Saturday 10 January 2026 14:54:00 +0000 (0:00:03.247) 0:00:27.401 ****** 2026-01-10 14:58:19.661983 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-01-10 14:58:19.661989 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-01-10 14:58:19.661996 | orchestrator | 2026-01-10 14:58:19.662002 | orchestrator | TASK [octavia : Adding octavia related roles] 
********************************** 2026-01-10 14:58:19.662008 | orchestrator | Saturday 10 January 2026 14:54:07 +0000 (0:00:07.467) 0:00:34.869 ****** 2026-01-10 14:58:19.662064 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-01-10 14:58:19.662071 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-01-10 14:58:19.662078 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-01-10 14:58:19.662085 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-01-10 14:58:19.662092 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-01-10 14:58:19.662099 | orchestrator | 2026-01-10 14:58:19.662107 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-10 14:58:19.662114 | orchestrator | Saturday 10 January 2026 14:54:23 +0000 (0:00:16.056) 0:00:50.926 ****** 2026-01-10 14:58:19.662121 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:58:19.662129 | orchestrator | 2026-01-10 14:58:19.662144 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-01-10 14:58:19.662152 | orchestrator | Saturday 10 January 2026 14:54:24 +0000 (0:00:00.577) 0:00:51.504 ****** 2026-01-10 14:58:19.662190 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:58:19.662197 | orchestrator | 2026-01-10 14:58:19.662204 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-01-10 14:58:19.662210 | orchestrator | Saturday 10 January 2026 14:54:29 +0000 (0:00:04.989) 0:00:56.494 ****** 2026-01-10 14:58:19.662217 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:58:19.662223 | orchestrator | 2026-01-10 14:58:19.662229 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-01-10 
14:58:19.662247 | orchestrator | Saturday 10 January 2026 14:54:33 +0000 (0:00:04.288) 0:01:00.782 ****** 2026-01-10 14:58:19.662256 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:58:19.662263 | orchestrator | 2026-01-10 14:58:19.662271 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-01-10 14:58:19.662279 | orchestrator | Saturday 10 January 2026 14:54:36 +0000 (0:00:03.115) 0:01:03.898 ****** 2026-01-10 14:58:19.662331 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-01-10 14:58:19.662340 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-01-10 14:58:19.662376 | orchestrator | 2026-01-10 14:58:19.662382 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-01-10 14:58:19.662388 | orchestrator | Saturday 10 January 2026 14:54:46 +0000 (0:00:09.894) 0:01:13.792 ****** 2026-01-10 14:58:19.662394 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-01-10 14:58:19.662401 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-01-10 14:58:19.662409 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-01-10 14:58:19.662415 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-01-10 14:58:19.662430 | orchestrator | 2026-01-10 14:58:19.662477 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-01-10 14:58:19.662484 | orchestrator | Saturday 10 January 2026 14:55:02 +0000 (0:00:16.455) 0:01:30.247 ****** 2026-01-10 14:58:19.662489 | orchestrator | changed: 
[testbed-node-0] 2026-01-10 14:58:19.662496 | orchestrator | 2026-01-10 14:58:19.662502 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-01-10 14:58:19.662508 | orchestrator | Saturday 10 January 2026 14:55:07 +0000 (0:00:04.393) 0:01:34.640 ****** 2026-01-10 14:58:19.662514 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:58:19.662521 | orchestrator | 2026-01-10 14:58:19.662527 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-01-10 14:58:19.662533 | orchestrator | Saturday 10 January 2026 14:55:13 +0000 (0:00:05.771) 0:01:40.411 ****** 2026-01-10 14:58:19.662764 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:58:19.662770 | orchestrator | 2026-01-10 14:58:19.662776 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-01-10 14:58:19.662782 | orchestrator | Saturday 10 January 2026 14:55:13 +0000 (0:00:00.204) 0:01:40.616 ****** 2026-01-10 14:58:19.662788 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:58:19.662794 | orchestrator | 2026-01-10 14:58:19.662836 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-10 14:58:19.662843 | orchestrator | Saturday 10 January 2026 14:55:17 +0000 (0:00:04.477) 0:01:45.093 ****** 2026-01-10 14:58:19.662849 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:58:19.662856 | orchestrator | 2026-01-10 14:58:19.662863 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-01-10 14:58:19.662869 | orchestrator | Saturday 10 January 2026 14:55:18 +0000 (0:00:01.004) 0:01:46.098 ****** 2026-01-10 14:58:19.662892 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:58:19.662899 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:58:19.662941 | orchestrator | changed: 
[testbed-node-0] 2026-01-10 14:58:19.662949 | orchestrator | 2026-01-10 14:58:19.662955 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-01-10 14:58:19.662962 | orchestrator | Saturday 10 January 2026 14:55:23 +0000 (0:00:04.415) 0:01:50.513 ****** 2026-01-10 14:58:19.663000 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:58:19.663009 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:58:19.663016 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:58:19.663023 | orchestrator | 2026-01-10 14:58:19.663030 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-01-10 14:58:19.663037 | orchestrator | Saturday 10 January 2026 14:55:27 +0000 (0:00:04.559) 0:01:55.072 ****** 2026-01-10 14:58:19.663043 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:58:19.663050 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:58:19.663057 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:58:19.663266 | orchestrator | 2026-01-10 14:58:19.663275 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-01-10 14:58:19.663282 | orchestrator | Saturday 10 January 2026 14:55:28 +0000 (0:00:00.761) 0:01:55.834 ****** 2026-01-10 14:58:19.663289 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:58:19.663296 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:58:19.663303 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:58:19.663310 | orchestrator | 2026-01-10 14:58:19.663316 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-01-10 14:58:19.663323 | orchestrator | Saturday 10 January 2026 14:55:30 +0000 (0:00:01.995) 0:01:57.829 ****** 2026-01-10 14:58:19.663330 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:58:19.663337 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:58:19.663343 | orchestrator | changed: [testbed-node-0] 2026-01-10 
14:58:19.663350 | orchestrator | 2026-01-10 14:58:19.663357 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-01-10 14:58:19.663378 | orchestrator | Saturday 10 January 2026 14:55:31 +0000 (0:00:01.366) 0:01:59.196 ****** 2026-01-10 14:58:19.663385 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:58:19.663392 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:58:19.663399 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:58:19.663405 | orchestrator | 2026-01-10 14:58:19.663412 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-01-10 14:58:19.663418 | orchestrator | Saturday 10 January 2026 14:55:33 +0000 (0:00:01.301) 0:02:00.497 ****** 2026-01-10 14:58:19.663425 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:58:19.663432 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:58:19.663439 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:58:19.663445 | orchestrator | 2026-01-10 14:58:19.663472 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-01-10 14:58:19.663479 | orchestrator | Saturday 10 January 2026 14:55:35 +0000 (0:00:02.341) 0:02:02.839 ****** 2026-01-10 14:58:19.663485 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:58:19.663491 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:58:19.663498 | orchestrator | changed: [testbed-node-2] 2026-01-10 14:58:19.663504 | orchestrator | 2026-01-10 14:58:19.663511 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-01-10 14:58:19.663517 | orchestrator | Saturday 10 January 2026 14:55:37 +0000 (0:00:02.259) 0:02:05.098 ****** 2026-01-10 14:58:19.663523 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:58:19.663531 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:58:19.663537 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:58:19.663544 | orchestrator 
| 2026-01-10 14:58:19.663550 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-01-10 14:58:19.663558 | orchestrator | Saturday 10 January 2026 14:55:38 +0000 (0:00:00.620) 0:02:05.719 ****** 2026-01-10 14:58:19.663565 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:58:19.663572 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:58:19.663580 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:58:19.663587 | orchestrator | 2026-01-10 14:58:19.663594 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-10 14:58:19.663602 | orchestrator | Saturday 10 January 2026 14:55:40 +0000 (0:00:02.493) 0:02:08.212 ****** 2026-01-10 14:58:19.663609 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:58:19.663617 | orchestrator | 2026-01-10 14:58:19.663625 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-01-10 14:58:19.663632 | orchestrator | Saturday 10 January 2026 14:55:41 +0000 (0:00:00.781) 0:02:08.994 ****** 2026-01-10 14:58:19.663640 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:58:19.663647 | orchestrator | 2026-01-10 14:58:19.663654 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-01-10 14:58:19.663662 | orchestrator | Saturday 10 January 2026 14:55:45 +0000 (0:00:03.961) 0:02:12.956 ****** 2026-01-10 14:58:19.663670 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:58:19.663677 | orchestrator | 2026-01-10 14:58:19.663685 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-01-10 14:58:19.663693 | orchestrator | Saturday 10 January 2026 14:55:48 +0000 (0:00:03.325) 0:02:16.282 ****** 2026-01-10 14:58:19.663700 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-01-10 14:58:19.663708 | 
orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-01-10 14:58:19.663716 | orchestrator | 2026-01-10 14:58:19.663723 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-01-10 14:58:19.663731 | orchestrator | Saturday 10 January 2026 14:55:56 +0000 (0:00:07.619) 0:02:23.901 ****** 2026-01-10 14:58:19.663738 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:58:19.663746 | orchestrator | 2026-01-10 14:58:19.663754 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-01-10 14:58:19.663761 | orchestrator | Saturday 10 January 2026 14:56:00 +0000 (0:00:03.887) 0:02:27.788 ****** 2026-01-10 14:58:19.663775 | orchestrator | ok: [testbed-node-0] 2026-01-10 14:58:19.663783 | orchestrator | ok: [testbed-node-1] 2026-01-10 14:58:19.663791 | orchestrator | ok: [testbed-node-2] 2026-01-10 14:58:19.663799 | orchestrator | 2026-01-10 14:58:19.663806 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-01-10 14:58:19.663814 | orchestrator | Saturday 10 January 2026 14:56:00 +0000 (0:00:00.507) 0:02:28.296 ****** 2026-01-10 14:58:19.663823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 
'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:58:19.663859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:58:19.663868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:58:19.663875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:58:19.663883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:58:19.663895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:58:19.663903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.663913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.663937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.663947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.663955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.663968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.663975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:58:19.663982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:58:19.664006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:58:19.664015 | orchestrator | 2026-01-10 14:58:19.664022 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-01-10 14:58:19.664029 | orchestrator | Saturday 10 January 2026 14:56:04 
+0000 (0:00:03.106) 0:02:31.403 ****** 2026-01-10 14:58:19.664036 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:58:19.664043 | orchestrator | 2026-01-10 14:58:19.664050 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-01-10 14:58:19.664056 | orchestrator | Saturday 10 January 2026 14:56:04 +0000 (0:00:00.129) 0:02:31.533 ****** 2026-01-10 14:58:19.664063 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:58:19.664070 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:58:19.664077 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:58:19.664084 | orchestrator | 2026-01-10 14:58:19.664091 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-01-10 14:58:19.664098 | orchestrator | Saturday 10 January 2026 14:56:04 +0000 (0:00:00.418) 0:02:31.951 ****** 2026-01-10 14:58:19.664105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:58:19.664116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 
'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:58:19.664124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.664131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.664139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:58:19.664146 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:58:19.664183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:58:19.664197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:58:19.664227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.664236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.664243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:58:19.664250 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:58:19.664280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:58:19.664289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:58:19.664301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.664309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.664316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:58:19.664323 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:58:19.664330 | orchestrator | 2026-01-10 14:58:19.664337 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-10 14:58:19.664344 | orchestrator | Saturday 10 January 2026 14:56:05 +0000 
(0:00:01.427) 0:02:33.378 ****** 2026-01-10 14:58:19.664352 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 14:58:19.664359 | orchestrator | 2026-01-10 14:58:19.664365 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-01-10 14:58:19.664372 | orchestrator | Saturday 10 January 2026 14:56:06 +0000 (0:00:00.758) 0:02:34.136 ****** 2026-01-10 14:58:19.664383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:58:19.664407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:58:19.664419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:58:19.664427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:58:19.664434 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:58:19.664441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:58:19.664452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.664472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.664484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.664491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.664499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.664506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.664516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:58:19.664539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:58:19.664552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:58:19.664559 | orchestrator | 2026-01-10 14:58:19.664566 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-01-10 14:58:19.664573 | orchestrator | Saturday 10 January 2026 14:56:12 +0000 (0:00:05.459) 0:02:39.596 ****** 2026-01-10 14:58:19.664579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:58:19.664586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:58:19.664592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.664601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.664621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:58:19.664633 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:58:19.664640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:58:19.664646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:58:19.664653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.664660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.664667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:58:19.664673 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:58:19.664690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:58:19.664697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:58:19.664703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.664710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.664716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:58:19.664722 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:58:19.664728 | orchestrator | 2026-01-10 14:58:19.664733 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-01-10 14:58:19.664739 | orchestrator | Saturday 10 January 2026 14:56:13 +0000 (0:00:00.954) 0:02:40.550 ****** 2026-01-10 14:58:19.664747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:58:19.664762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:58:19.664768 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.664774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.664780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:58:19.664786 | orchestrator | skipping: [testbed-node-0] 2026-01-10 
14:58:19.664792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:58:19.664806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:58:19.664817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.664823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.664830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:58:19.664836 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:58:19.664842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:58:19.664848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:58:19.664860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.664871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.664878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:58:19.664884 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:58:19.664890 | orchestrator | 2026-01-10 14:58:19.664896 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-01-10 14:58:19.664902 | orchestrator | Saturday 10 January 2026 14:56:14 +0000 (0:00:00.949) 0:02:41.500 ****** 2026-01-10 14:58:19.664909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:58:19.664916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:58:19.664929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:58:19.664939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:58:19.664946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:58:19.664952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:58:19.664960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.664967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.664978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.664987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.664997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665037 | 
orchestrator | 2026-01-10 14:58:19.665044 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-01-10 14:58:19.665051 | orchestrator | Saturday 10 January 2026 14:56:19 +0000 (0:00:04.997) 0:02:46.497 ****** 2026-01-10 14:58:19.665058 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-10 14:58:19.665065 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-10 14:58:19.665071 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-10 14:58:19.665077 | orchestrator | 2026-01-10 14:58:19.665083 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-01-10 14:58:19.665089 | orchestrator | Saturday 10 January 2026 14:56:21 +0000 (0:00:02.014) 0:02:48.511 ****** 2026-01-10 14:58:19.665101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:58:19.665108 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:58:19.665115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:58:19.665125 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:58:19.665132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:58:19.665141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:58:19.665151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665247 | orchestrator | 2026-01-10 14:58:19.665253 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-01-10 14:58:19.665260 | orchestrator | Saturday 10 January 2026 14:56:38 +0000 (0:00:17.408) 0:03:05.920 ****** 2026-01-10 14:58:19.665266 | orchestrator | changed: [testbed-node-0] 2026-01-10 14:58:19.665274 | orchestrator | changed: [testbed-node-1] 2026-01-10 14:58:19.665284 | orchestrator | changed: [testbed-node-2] 2026-01-10 
14:58:19.665290 | orchestrator | 2026-01-10 14:58:19.665297 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-01-10 14:58:19.665303 | orchestrator | Saturday 10 January 2026 14:56:39 +0000 (0:00:01.401) 0:03:07.322 ****** 2026-01-10 14:58:19.665309 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-10 14:58:19.665316 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-10 14:58:19.665322 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-10 14:58:19.665328 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-10 14:58:19.665334 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-10 14:58:19.665340 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-10 14:58:19.665345 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-10 14:58:19.665351 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-10 14:58:19.665357 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-10 14:58:19.665363 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-10 14:58:19.665369 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-10 14:58:19.665375 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-10 14:58:19.665381 | orchestrator | 2026-01-10 14:58:19.665387 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-01-10 14:58:19.665393 | orchestrator | Saturday 10 January 2026 14:56:44 +0000 (0:00:05.037) 0:03:12.360 ****** 2026-01-10 14:58:19.665399 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-10 14:58:19.665404 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-10 14:58:19.665410 | 
orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-10 14:58:19.665416 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-10 14:58:19.665422 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-10 14:58:19.665427 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-10 14:58:19.665433 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-10 14:58:19.665438 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-10 14:58:19.665444 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-10 14:58:19.665450 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-10 14:58:19.665456 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-10 14:58:19.665462 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-10 14:58:19.665468 | orchestrator | 2026-01-10 14:58:19.665474 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-01-10 14:58:19.665480 | orchestrator | Saturday 10 January 2026 14:56:50 +0000 (0:00:05.480) 0:03:17.840 ****** 2026-01-10 14:58:19.665489 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-10 14:58:19.665495 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-10 14:58:19.665501 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-10 14:58:19.665507 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-10 14:58:19.665513 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-10 14:58:19.665519 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-10 14:58:19.665525 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-10 14:58:19.665531 | orchestrator 
| changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-10 14:58:19.665541 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-10 14:58:19.665547 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-10 14:58:19.665553 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-10 14:58:19.665566 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-10 14:58:19.665572 | orchestrator | 2026-01-10 14:58:19.665578 | orchestrator | TASK [service-check-containers : octavia | Check containers] ******************* 2026-01-10 14:58:19.665583 | orchestrator | Saturday 10 January 2026 14:56:55 +0000 (0:00:05.096) 0:03:22.937 ****** 2026-01-10 14:58:19.665590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:58:19.665596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:58:19.665603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-10 14:58:19.665613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:58:19.665622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:58:19.665632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-10 14:58:19.665637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 
'timeout': '30'}}}) 2026-01-10 14:58:19.665688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-10 14:58:19.665700 | orchestrator | 2026-01-10 14:58:19.665706 | orchestrator | TASK [service-check-containers : octavia | Notify handlers to restart containers] *** 2026-01-10 14:58:19.665712 | orchestrator | Saturday 10 January 2026 14:56:59 +0000 (0:00:04.321) 0:03:27.258 ****** 2026-01-10 14:58:19.665719 | orchestrator | changed: [testbed-node-0] => { 2026-01-10 14:58:19.665725 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:58:19.665731 | orchestrator | } 2026-01-10 14:58:19.665738 | orchestrator | changed: [testbed-node-1] => { 2026-01-10 14:58:19.665744 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:58:19.665750 | orchestrator | } 2026-01-10 14:58:19.665756 | orchestrator | 
changed: [testbed-node-2] => { 2026-01-10 14:58:19.665762 | orchestrator |  "msg": "Notifying handlers" 2026-01-10 14:58:19.665769 | orchestrator | } 2026-01-10 14:58:19.665776 | orchestrator | 2026-01-10 14:58:19.665783 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-10 14:58:19.665789 | orchestrator | Saturday 10 January 2026 14:57:00 +0000 (0:00:00.421) 0:03:27.680 ****** 2026-01-10 14:58:19.665799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:58:19.665814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:58:19.665821 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.665828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.665834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:58:19.665841 | orchestrator | skipping: 
[testbed-node-0] 2026-01-10 14:58:19.665847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:58:19.665857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:58:19.665870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.665877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.665885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:58:19.665891 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:58:19.665898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-10 14:58:19.665906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-10 14:58:19.665917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.665930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-10 14:58:19.665938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-10 14:58:19.665945 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:58:19.665952 | orchestrator | 2026-01-10 14:58:19.665959 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-10 14:58:19.665966 | orchestrator | Saturday 10 January 2026 14:57:01 +0000 (0:00:01.514) 0:03:29.194 ****** 2026-01-10 14:58:19.665973 | orchestrator | skipping: [testbed-node-0] 2026-01-10 14:58:19.665980 | orchestrator | skipping: [testbed-node-1] 2026-01-10 14:58:19.665986 | orchestrator | skipping: [testbed-node-2] 2026-01-10 14:58:19.665993 | orchestrator | 2026-01-10 14:58:19.666000 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-01-10 14:58:19.666007 | orchestrator | Saturday 10 January 2026 14:57:02 +0000 (0:00:00.339) 0:03:29.534 ****** 2026-01-10 
14:58:19.666054 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:58:19.666061 | orchestrator |
2026-01-10 14:58:19.666069 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-01-10 14:58:19.666075 | orchestrator | Saturday 10 January 2026 14:57:04 +0000 (0:00:02.447) 0:03:31.982 ******
2026-01-10 14:58:19.666083 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:58:19.666090 | orchestrator |
2026-01-10 14:58:19.666097 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-01-10 14:58:19.666103 | orchestrator | Saturday 10 January 2026 14:57:07 +0000 (0:00:02.517) 0:03:34.499 ******
2026-01-10 14:58:19.666110 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:58:19.666118 | orchestrator |
2026-01-10 14:58:19.666124 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-01-10 14:58:19.666131 | orchestrator | Saturday 10 January 2026 14:57:09 +0000 (0:00:01.983) 0:03:36.483 ******
2026-01-10 14:58:19.666138 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:58:19.666145 | orchestrator |
2026-01-10 14:58:19.666152 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-01-10 14:58:19.666172 | orchestrator | Saturday 10 January 2026 14:57:11 +0000 (0:00:02.089) 0:03:38.572 ******
2026-01-10 14:58:19.666180 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:58:19.666191 | orchestrator |
2026-01-10 14:58:19.666198 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-01-10 14:58:19.666205 | orchestrator | Saturday 10 January 2026 14:57:32 +0000 (0:00:21.587) 0:04:00.160 ******
2026-01-10 14:58:19.666212 | orchestrator |
2026-01-10 14:58:19.666218 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-01-10 14:58:19.666225 | orchestrator | Saturday 10 January 2026 14:57:32 +0000 (0:00:00.070) 0:04:00.231 ******
2026-01-10 14:58:19.666232 | orchestrator |
2026-01-10 14:58:19.666239 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-01-10 14:58:19.666246 | orchestrator | Saturday 10 January 2026 14:57:32 +0000 (0:00:00.069) 0:04:00.300 ******
2026-01-10 14:58:19.666252 | orchestrator |
2026-01-10 14:58:19.666259 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-01-10 14:58:19.666266 | orchestrator | Saturday 10 January 2026 14:57:33 +0000 (0:00:00.318) 0:04:00.618 ******
2026-01-10 14:58:19.666274 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:58:19.666281 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:58:19.666289 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:58:19.666296 | orchestrator |
2026-01-10 14:58:19.666303 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-01-10 14:58:19.666311 | orchestrator | Saturday 10 January 2026 14:57:48 +0000 (0:00:14.837) 0:04:15.456 ******
2026-01-10 14:58:19.666318 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:58:19.666326 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:58:19.666333 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:58:19.666340 | orchestrator |
2026-01-10 14:58:19.666348 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-01-10 14:58:19.666355 | orchestrator | Saturday 10 January 2026 14:57:54 +0000 (0:00:06.291) 0:04:21.747 ******
2026-01-10 14:58:19.666362 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:58:19.666370 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:58:19.666377 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:58:19.666384 | orchestrator |
2026-01-10 14:58:19.666392 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container]
*************
2026-01-10 14:58:19.666399 | orchestrator | Saturday 10 January 2026 14:58:04 +0000 (0:00:10.257) 0:04:32.005 ******
2026-01-10 14:58:19.666406 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:58:19.666414 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:58:19.666421 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:58:19.666428 | orchestrator |
2026-01-10 14:58:19.666439 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-01-10 14:58:19.666447 | orchestrator | Saturday 10 January 2026 14:58:10 +0000 (0:00:06.014) 0:04:38.019 ******
2026-01-10 14:58:19.666455 | orchestrator | changed: [testbed-node-0]
2026-01-10 14:58:19.666461 | orchestrator | changed: [testbed-node-1]
2026-01-10 14:58:19.666467 | orchestrator | changed: [testbed-node-2]
2026-01-10 14:58:19.666473 | orchestrator |
2026-01-10 14:58:19.666480 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 14:58:19.666489 | orchestrator | testbed-node-0 : ok=58  changed=39  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-01-10 14:58:19.666501 | orchestrator | testbed-node-1 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-10 14:58:19.666509 | orchestrator | testbed-node-2 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-10 14:58:19.666516 | orchestrator |
2026-01-10 14:58:19.666523 | orchestrator |
2026-01-10 14:58:19.666530 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 14:58:19.666538 | orchestrator | Saturday 10 January 2026 14:58:16 +0000 (0:00:06.017) 0:04:44.037 ******
2026-01-10 14:58:19.666546 | orchestrator | ===============================================================================
2026-01-10 14:58:19.666558 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.59s
2026-01-10 14:58:19.666566 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.41s
2026-01-10 14:58:19.666574 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.46s
2026-01-10 14:58:19.666581 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.06s
2026-01-10 14:58:19.666588 | orchestrator | octavia : Restart octavia-api container -------------------------------- 14.84s
2026-01-10 14:58:19.666596 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.26s
2026-01-10 14:58:19.666603 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.89s
2026-01-10 14:58:19.666610 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.71s
2026-01-10 14:58:19.666617 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.62s
2026-01-10 14:58:19.666624 | orchestrator | service-ks-register : octavia | Granting/revoking user roles ------------ 7.47s
2026-01-10 14:58:19.666631 | orchestrator | service-ks-register : octavia | Creating/deleting endpoints ------------- 6.80s
2026-01-10 14:58:19.666639 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.29s
2026-01-10 14:58:19.666646 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 6.02s
2026-01-10 14:58:19.666653 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 6.01s
2026-01-10 14:58:19.666660 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.77s
2026-01-10 14:58:19.666667 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.48s
2026-01-10 14:58:19.666673 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.46s
2026-01-10 14:58:19.666680 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.10s
2026-01-10 14:58:19.666687 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.04s
2026-01-10 14:58:19.666694 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.00s
2026-01-10 14:58:22.693214 | orchestrator | 2026-01-10 14:58:22 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 14:58:25.735921 | orchestrator | 2026-01-10 14:58:25 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 14:58:28.784622 | orchestrator | 2026-01-10 14:58:28 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 14:58:31.823324 | orchestrator | 2026-01-10 14:58:31 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 14:58:34.867976 | orchestrator | 2026-01-10 14:58:34 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 14:58:37.910875 | orchestrator | 2026-01-10 14:58:37 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 14:58:40.962184 | orchestrator | 2026-01-10 14:58:40 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 14:58:43.996539 | orchestrator | 2026-01-10 14:58:43 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 14:58:47.036934 | orchestrator | 2026-01-10 14:58:47 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 14:58:50.082530 | orchestrator | 2026-01-10 14:58:50 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 14:58:53.121247 | orchestrator | 2026-01-10 14:58:53 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 14:58:56.164433 | orchestrator | 2026-01-10 14:58:56 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 14:58:59.215519 | orchestrator | 2026-01-10 14:58:59 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-10 14:59:02.268388 | orchestrator | 2026-01-10 14:59:02 |
INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:59:05.313601 | orchestrator | 2026-01-10 14:59:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:59:08.357628 | orchestrator | 2026-01-10 14:59:08 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:59:11.401603 | orchestrator | 2026-01-10 14:59:11 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:59:14.440525 | orchestrator | 2026-01-10 14:59:14 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:59:17.481909 | orchestrator | 2026-01-10 14:59:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-01-10 14:59:20.515560 | orchestrator | 2026-01-10 14:59:20.871099 | orchestrator | 2026-01-10 14:59:20.880377 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Jan 10 14:59:20 UTC 2026 2026-01-10 14:59:20.880437 | orchestrator | 2026-01-10 14:59:21.344878 | orchestrator | ok: Runtime: 0:35:14.195040 2026-01-10 14:59:21.617307 | 2026-01-10 14:59:21.617484 | TASK [Bootstrap services] 2026-01-10 14:59:22.366644 | orchestrator | 2026-01-10 14:59:22.366752 | orchestrator | # BOOTSTRAP 2026-01-10 14:59:22.366765 | orchestrator | 2026-01-10 14:59:22.366773 | orchestrator | + set -e 2026-01-10 14:59:22.366780 | orchestrator | + echo 2026-01-10 14:59:22.366787 | orchestrator | + echo '# BOOTSTRAP' 2026-01-10 14:59:22.366796 | orchestrator | + echo 2026-01-10 14:59:22.366818 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-01-10 14:59:22.375885 | orchestrator | + set -e 2026-01-10 14:59:22.375937 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-01-10 14:59:27.597522 | orchestrator | 2026-01-10 14:59:27 | INFO  | It takes a moment until task 565fab1c-d9f1-44a5-9be1-dff6fa4114b1 (flavor-manager) has been started and output is visible here. 
2026-01-10 14:59:34.700691 | orchestrator | 2026-01-10 14:59:30 | INFO  | Flavor SCS-1L-1 created 2026-01-10 14:59:34.700768 | orchestrator | 2026-01-10 14:59:30 | INFO  | Flavor SCS-1L-1-5 created 2026-01-10 14:59:34.700779 | orchestrator | 2026-01-10 14:59:31 | INFO  | Flavor SCS-1V-2 created 2026-01-10 14:59:34.700787 | orchestrator | 2026-01-10 14:59:31 | INFO  | Flavor SCS-1V-2-5 created 2026-01-10 14:59:34.700795 | orchestrator | 2026-01-10 14:59:31 | INFO  | Flavor SCS-1V-4 created 2026-01-10 14:59:34.700803 | orchestrator | 2026-01-10 14:59:31 | INFO  | Flavor SCS-1V-4-10 created 2026-01-10 14:59:34.700811 | orchestrator | 2026-01-10 14:59:31 | INFO  | Flavor SCS-1V-8 created 2026-01-10 14:59:34.700818 | orchestrator | 2026-01-10 14:59:31 | INFO  | Flavor SCS-1V-8-20 created 2026-01-10 14:59:34.700831 | orchestrator | 2026-01-10 14:59:31 | INFO  | Flavor SCS-2V-4 created 2026-01-10 14:59:34.700840 | orchestrator | 2026-01-10 14:59:32 | INFO  | Flavor SCS-2V-4-10 created 2026-01-10 14:59:34.700847 | orchestrator | 2026-01-10 14:59:32 | INFO  | Flavor SCS-2V-8 created 2026-01-10 14:59:34.700855 | orchestrator | 2026-01-10 14:59:32 | INFO  | Flavor SCS-2V-8-20 created 2026-01-10 14:59:34.700862 | orchestrator | 2026-01-10 14:59:32 | INFO  | Flavor SCS-2V-16 created 2026-01-10 14:59:34.700870 | orchestrator | 2026-01-10 14:59:32 | INFO  | Flavor SCS-2V-16-50 created 2026-01-10 14:59:34.700877 | orchestrator | 2026-01-10 14:59:32 | INFO  | Flavor SCS-4V-8 created 2026-01-10 14:59:34.700891 | orchestrator | 2026-01-10 14:59:32 | INFO  | Flavor SCS-4V-8-20 created 2026-01-10 14:59:34.700898 | orchestrator | 2026-01-10 14:59:33 | INFO  | Flavor SCS-4V-16 created 2026-01-10 14:59:34.700905 | orchestrator | 2026-01-10 14:59:33 | INFO  | Flavor SCS-4V-16-50 created 2026-01-10 14:59:34.700912 | orchestrator | 2026-01-10 14:59:33 | INFO  | Flavor SCS-4V-32 created 2026-01-10 14:59:34.700919 | orchestrator | 2026-01-10 14:59:33 | INFO  | Flavor SCS-4V-32-100 created 
2026-01-10 14:59:34.700926 | orchestrator | 2026-01-10 14:59:33 | INFO  | Flavor SCS-8V-16 created 2026-01-10 14:59:34.700933 | orchestrator | 2026-01-10 14:59:33 | INFO  | Flavor SCS-8V-16-50 created 2026-01-10 14:59:34.700941 | orchestrator | 2026-01-10 14:59:33 | INFO  | Flavor SCS-8V-32 created 2026-01-10 14:59:34.700948 | orchestrator | 2026-01-10 14:59:33 | INFO  | Flavor SCS-8V-32-100 created 2026-01-10 14:59:34.700955 | orchestrator | 2026-01-10 14:59:33 | INFO  | Flavor SCS-16V-32 created 2026-01-10 14:59:34.700962 | orchestrator | 2026-01-10 14:59:34 | INFO  | Flavor SCS-16V-32-100 created 2026-01-10 14:59:34.700970 | orchestrator | 2026-01-10 14:59:34 | INFO  | Flavor SCS-2V-4-20s created 2026-01-10 14:59:34.700977 | orchestrator | 2026-01-10 14:59:34 | INFO  | Flavor SCS-4V-8-50s created 2026-01-10 14:59:34.700984 | orchestrator | 2026-01-10 14:59:34 | INFO  | Flavor SCS-8V-32-100s created 2026-01-10 14:59:37.143757 | orchestrator | 2026-01-10 14:59:37 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-01-10 14:59:47.267228 | orchestrator | 2026-01-10 14:59:47 | INFO  | Task 7004c960-1bee-4529-b154-80bc05db7ee8 (bootstrap-basic) was prepared for execution. 2026-01-10 14:59:47.267325 | orchestrator | 2026-01-10 14:59:47 | INFO  | It takes a moment until task 7004c960-1bee-4529-b154-80bc05db7ee8 (bootstrap-basic) has been started and output is visible here. 
2026-01-10 15:00:35.152131 | orchestrator | 2026-01-10 15:00:35.152219 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-01-10 15:00:35.152226 | orchestrator | 2026-01-10 15:00:35.152231 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-10 15:00:35.152236 | orchestrator | Saturday 10 January 2026 14:59:51 +0000 (0:00:00.073) 0:00:00.073 ****** 2026-01-10 15:00:35.152240 | orchestrator | ok: [localhost] 2026-01-10 15:00:35.152245 | orchestrator | 2026-01-10 15:00:35.152250 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-01-10 15:00:35.152254 | orchestrator | Saturday 10 January 2026 14:59:53 +0000 (0:00:01.892) 0:00:01.965 ****** 2026-01-10 15:00:35.152258 | orchestrator | ok: [localhost] 2026-01-10 15:00:35.152262 | orchestrator | 2026-01-10 15:00:35.152266 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-01-10 15:00:35.152271 | orchestrator | Saturday 10 January 2026 15:00:02 +0000 (0:00:09.112) 0:00:11.078 ****** 2026-01-10 15:00:35.152275 | orchestrator | changed: [localhost] 2026-01-10 15:00:35.152279 | orchestrator | 2026-01-10 15:00:35.152283 | orchestrator | TASK [Create public network] *************************************************** 2026-01-10 15:00:35.152288 | orchestrator | Saturday 10 January 2026 15:00:10 +0000 (0:00:08.175) 0:00:19.253 ****** 2026-01-10 15:00:35.152292 | orchestrator | changed: [localhost] 2026-01-10 15:00:35.152296 | orchestrator | 2026-01-10 15:00:35.152300 | orchestrator | TASK [Set public network to default] ******************************************* 2026-01-10 15:00:35.152304 | orchestrator | Saturday 10 January 2026 15:00:16 +0000 (0:00:05.366) 0:00:24.620 ****** 2026-01-10 15:00:35.152311 | orchestrator | changed: [localhost] 2026-01-10 15:00:35.152315 | orchestrator | 2026-01-10 15:00:35.152320 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-01-10 15:00:35.152324 | orchestrator | Saturday 10 January 2026 15:00:22 +0000 (0:00:06.551) 0:00:31.171 ****** 2026-01-10 15:00:35.152328 | orchestrator | changed: [localhost] 2026-01-10 15:00:35.152331 | orchestrator | 2026-01-10 15:00:35.152335 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-01-10 15:00:35.152339 | orchestrator | Saturday 10 January 2026 15:00:27 +0000 (0:00:04.449) 0:00:35.621 ****** 2026-01-10 15:00:35.152343 | orchestrator | changed: [localhost] 2026-01-10 15:00:35.152347 | orchestrator | 2026-01-10 15:00:35.152351 | orchestrator | TASK [Create manager role] ***************************************************** 2026-01-10 15:00:35.152361 | orchestrator | Saturday 10 January 2026 15:00:31 +0000 (0:00:03.893) 0:00:39.515 ****** 2026-01-10 15:00:35.152365 | orchestrator | ok: [localhost] 2026-01-10 15:00:35.152369 | orchestrator | 2026-01-10 15:00:35.152373 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 15:00:35.152377 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 15:00:35.152381 | orchestrator | 2026-01-10 15:00:35.152385 | orchestrator | 2026-01-10 15:00:35.152389 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 15:00:35.152393 | orchestrator | Saturday 10 January 2026 15:00:34 +0000 (0:00:03.697) 0:00:43.212 ****** 2026-01-10 15:00:35.152397 | orchestrator | =============================================================================== 2026-01-10 15:00:35.152401 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.11s 2026-01-10 15:00:35.152405 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.18s 2026-01-10 15:00:35.152409 | 
orchestrator | Set public network to default ------------------------------------------- 6.55s 2026-01-10 15:00:35.152412 | orchestrator | Create public network --------------------------------------------------- 5.37s 2026-01-10 15:00:35.152431 | orchestrator | Create public subnet ---------------------------------------------------- 4.45s 2026-01-10 15:00:35.152438 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.89s 2026-01-10 15:00:35.152444 | orchestrator | Create manager role ----------------------------------------------------- 3.70s 2026-01-10 15:00:35.152450 | orchestrator | Gathering Facts --------------------------------------------------------- 1.89s 2026-01-10 15:00:37.750473 | orchestrator | 2026-01-10 15:00:37 | INFO  | It takes a moment until task 4a0d8ef7-3a7e-460f-b01a-10a6c0e53498 (image-manager) has been started and output is visible here. 2026-01-10 15:01:16.165174 | orchestrator | 2026-01-10 15:00:40 | INFO  | Processing image 'Cirros 0.6.2' 2026-01-10 15:01:16.165346 | orchestrator | 2026-01-10 15:00:40 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-01-10 15:01:16.165365 | orchestrator | 2026-01-10 15:00:40 | INFO  | Importing image Cirros 0.6.2 2026-01-10 15:01:16.165373 | orchestrator | 2026-01-10 15:00:40 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-01-10 15:01:16.165381 | orchestrator | 2026-01-10 15:00:43 | INFO  | Waiting for image to leave queued state... 2026-01-10 15:01:16.165390 | orchestrator | 2026-01-10 15:00:45 | INFO  | Waiting for import to complete... 
2026-01-10 15:01:16.165396 | orchestrator | 2026-01-10 15:00:55 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-01-10 15:01:16.165405 | orchestrator | 2026-01-10 15:00:55 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-01-10 15:01:16.165412 | orchestrator | 2026-01-10 15:00:55 | INFO  | Setting internal_version = 0.6.2 2026-01-10 15:01:16.165419 | orchestrator | 2026-01-10 15:00:55 | INFO  | Setting image_original_user = cirros 2026-01-10 15:01:16.165427 | orchestrator | 2026-01-10 15:00:55 | INFO  | Adding tag os:cirros 2026-01-10 15:01:16.165435 | orchestrator | 2026-01-10 15:00:55 | INFO  | Setting property architecture: x86_64 2026-01-10 15:01:16.165444 | orchestrator | 2026-01-10 15:00:55 | INFO  | Setting property hw_disk_bus: scsi 2026-01-10 15:01:16.165451 | orchestrator | 2026-01-10 15:00:55 | INFO  | Setting property hw_rng_model: virtio 2026-01-10 15:01:16.165459 | orchestrator | 2026-01-10 15:00:56 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-10 15:01:16.165466 | orchestrator | 2026-01-10 15:00:56 | INFO  | Setting property hw_watchdog_action: reset 2026-01-10 15:01:16.165473 | orchestrator | 2026-01-10 15:00:56 | INFO  | Setting property hypervisor_type: qemu 2026-01-10 15:01:16.165480 | orchestrator | 2026-01-10 15:00:56 | INFO  | Setting property os_distro: cirros 2026-01-10 15:01:16.165488 | orchestrator | 2026-01-10 15:00:56 | INFO  | Setting property os_purpose: minimal 2026-01-10 15:01:16.165495 | orchestrator | 2026-01-10 15:00:57 | INFO  | Setting property replace_frequency: never 2026-01-10 15:01:16.165502 | orchestrator | 2026-01-10 15:00:57 | INFO  | Setting property uuid_validity: none 2026-01-10 15:01:16.165510 | orchestrator | 2026-01-10 15:00:57 | INFO  | Setting property provided_until: none 2026-01-10 15:01:16.165517 | orchestrator | 2026-01-10 15:00:57 | INFO  | Setting property image_description: Cirros 2026-01-10 15:01:16.165525 | orchestrator | 2026-01-10 15:00:57 | INFO  | 
Setting property image_name: Cirros 2026-01-10 15:01:16.165532 | orchestrator | 2026-01-10 15:00:57 | INFO  | Setting property internal_version: 0.6.2 2026-01-10 15:01:16.165539 | orchestrator | 2026-01-10 15:00:58 | INFO  | Setting property image_original_user: cirros 2026-01-10 15:01:16.165585 | orchestrator | 2026-01-10 15:00:58 | INFO  | Setting property os_version: 0.6.2 2026-01-10 15:01:16.165614 | orchestrator | 2026-01-10 15:00:58 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-01-10 15:01:16.165623 | orchestrator | 2026-01-10 15:00:58 | INFO  | Setting property image_build_date: 2023-05-30 2026-01-10 15:01:16.165631 | orchestrator | 2026-01-10 15:00:58 | INFO  | Checking status of 'Cirros 0.6.2' 2026-01-10 15:01:16.165638 | orchestrator | 2026-01-10 15:00:58 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-01-10 15:01:16.165644 | orchestrator | 2026-01-10 15:00:58 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-01-10 15:01:16.165651 | orchestrator | 2026-01-10 15:00:59 | INFO  | Processing image 'Cirros 0.6.3' 2026-01-10 15:01:16.165662 | orchestrator | 2026-01-10 15:00:59 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-01-10 15:01:16.165668 | orchestrator | 2026-01-10 15:00:59 | INFO  | Importing image Cirros 0.6.3 2026-01-10 15:01:16.165675 | orchestrator | 2026-01-10 15:00:59 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-01-10 15:01:16.165682 | orchestrator | 2026-01-10 15:00:59 | INFO  | Waiting for image to leave queued state... 2026-01-10 15:01:16.165689 | orchestrator | 2026-01-10 15:01:01 | INFO  | Waiting for import to complete... 
2026-01-10 15:01:16.165713 | orchestrator | 2026-01-10 15:01:11 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-01-10 15:01:16.165727 | orchestrator | 2026-01-10 15:01:12 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-01-10 15:01:16.165735 | orchestrator | 2026-01-10 15:01:12 | INFO  | Setting internal_version = 0.6.3 2026-01-10 15:01:16.165742 | orchestrator | 2026-01-10 15:01:12 | INFO  | Setting image_original_user = cirros 2026-01-10 15:01:16.165750 | orchestrator | 2026-01-10 15:01:12 | INFO  | Adding tag os:cirros 2026-01-10 15:01:16.165761 | orchestrator | 2026-01-10 15:01:12 | INFO  | Setting property architecture: x86_64 2026-01-10 15:01:16.165773 | orchestrator | 2026-01-10 15:01:12 | INFO  | Setting property hw_disk_bus: scsi 2026-01-10 15:01:16.165785 | orchestrator | 2026-01-10 15:01:12 | INFO  | Setting property hw_rng_model: virtio 2026-01-10 15:01:16.165798 | orchestrator | 2026-01-10 15:01:12 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-10 15:01:16.165810 | orchestrator | 2026-01-10 15:01:12 | INFO  | Setting property hw_watchdog_action: reset 2026-01-10 15:01:16.165823 | orchestrator | 2026-01-10 15:01:13 | INFO  | Setting property hypervisor_type: qemu 2026-01-10 15:01:16.165836 | orchestrator | 2026-01-10 15:01:13 | INFO  | Setting property os_distro: cirros 2026-01-10 15:01:16.165849 | orchestrator | 2026-01-10 15:01:13 | INFO  | Setting property os_purpose: minimal 2026-01-10 15:01:16.165861 | orchestrator | 2026-01-10 15:01:13 | INFO  | Setting property replace_frequency: never 2026-01-10 15:01:16.165874 | orchestrator | 2026-01-10 15:01:13 | INFO  | Setting property uuid_validity: none 2026-01-10 15:01:16.165886 | orchestrator | 2026-01-10 15:01:14 | INFO  | Setting property provided_until: none 2026-01-10 15:01:16.165898 | orchestrator | 2026-01-10 15:01:14 | INFO  | Setting property image_description: Cirros 2026-01-10 15:01:16.165910 | orchestrator | 2026-01-10 15:01:14 | INFO  | 
Setting property image_name: Cirros 2026-01-10 15:01:16.165917 | orchestrator | 2026-01-10 15:01:14 | INFO  | Setting property internal_version: 0.6.3 2026-01-10 15:01:16.165931 | orchestrator | 2026-01-10 15:01:14 | INFO  | Setting property image_original_user: cirros 2026-01-10 15:01:16.165939 | orchestrator | 2026-01-10 15:01:14 | INFO  | Setting property os_version: 0.6.3 2026-01-10 15:01:16.165944 | orchestrator | 2026-01-10 15:01:15 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-01-10 15:01:16.165951 | orchestrator | 2026-01-10 15:01:15 | INFO  | Setting property image_build_date: 2024-09-26 2026-01-10 15:01:16.165958 | orchestrator | 2026-01-10 15:01:15 | INFO  | Checking status of 'Cirros 0.6.3' 2026-01-10 15:01:16.165966 | orchestrator | 2026-01-10 15:01:15 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-01-10 15:01:16.165972 | orchestrator | 2026-01-10 15:01:15 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-01-10 15:01:16.495587 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-01-10 15:01:18.842472 | orchestrator | 2026-01-10 15:01:18 | INFO  | date: 2026-01-10 2026-01-10 15:01:18.842558 | orchestrator | 2026-01-10 15:01:18 | INFO  | image: octavia-amphora-haproxy-2025.1.20260110.qcow2 2026-01-10 15:01:18.842598 | orchestrator | 2026-01-10 15:01:18 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260110.qcow2 2026-01-10 15:01:18.842610 | orchestrator | 2026-01-10 15:01:18 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260110.qcow2.CHECKSUM 2026-01-10 15:01:19.197969 | orchestrator | 2026-01-10 15:01:19 | INFO  | checksum: c5ea04bca8a01758b5f07ec62ab2524912de8516c71a0715ab8e56ab21ebbd36 2026-01-10 15:01:19.281978 | orchestrator | 
2026-01-10 15:01:19 | INFO  | It takes a moment until task e88eda1d-c286-464f-9d63-dcdfc508b0c6 (image-manager) has been started and output is visible here. 2026-01-10 15:02:29.881242 | orchestrator | 2026-01-10 15:01:21 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-01-10' 2026-01-10 15:02:29.881363 | orchestrator | 2026-01-10 15:01:21 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260110.qcow2: 200 2026-01-10 15:02:29.881375 | orchestrator | 2026-01-10 15:01:21 | INFO  | Importing image OpenStack Octavia Amphora 2026-01-10 2026-01-10 15:02:29.881380 | orchestrator | 2026-01-10 15:01:21 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260110.qcow2 2026-01-10 15:02:29.881386 | orchestrator | 2026-01-10 15:01:23 | INFO  | Waiting for image to leave queued state... 2026-01-10 15:02:29.881390 | orchestrator | 2026-01-10 15:01:25 | INFO  | Waiting for import to complete... 2026-01-10 15:02:29.881395 | orchestrator | 2026-01-10 15:01:35 | INFO  | Waiting for import to complete... 2026-01-10 15:02:29.881399 | orchestrator | 2026-01-10 15:01:45 | INFO  | Waiting for import to complete... 2026-01-10 15:02:29.881403 | orchestrator | 2026-01-10 15:01:55 | INFO  | Waiting for import to complete... 2026-01-10 15:02:29.881409 | orchestrator | 2026-01-10 15:02:05 | INFO  | Waiting for import to complete... 2026-01-10 15:02:29.881413 | orchestrator | 2026-01-10 15:02:15 | INFO  | Waiting for import to complete... 
2026-01-10 15:02:29.881473 | orchestrator | 2026-01-10 15:02:25 | INFO  | Import of 'OpenStack Octavia Amphora 2026-01-10' successfully completed, reloading images 2026-01-10 15:02:29.881482 | orchestrator | 2026-01-10 15:02:25 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-01-10' 2026-01-10 15:02:29.881516 | orchestrator | 2026-01-10 15:02:25 | INFO  | Setting internal_version = 2026-01-10 2026-01-10 15:02:29.881523 | orchestrator | 2026-01-10 15:02:25 | INFO  | Setting image_original_user = ubuntu 2026-01-10 15:02:29.881531 | orchestrator | 2026-01-10 15:02:25 | INFO  | Adding tag amphora 2026-01-10 15:02:29.881538 | orchestrator | 2026-01-10 15:02:26 | INFO  | Adding tag os:ubuntu 2026-01-10 15:02:29.881545 | orchestrator | 2026-01-10 15:02:26 | INFO  | Setting property architecture: x86_64 2026-01-10 15:02:29.881552 | orchestrator | 2026-01-10 15:02:26 | INFO  | Setting property hw_disk_bus: scsi 2026-01-10 15:02:29.881558 | orchestrator | 2026-01-10 15:02:26 | INFO  | Setting property hw_rng_model: virtio 2026-01-10 15:02:29.881576 | orchestrator | 2026-01-10 15:02:26 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-01-10 15:02:29.881581 | orchestrator | 2026-01-10 15:02:27 | INFO  | Setting property hw_watchdog_action: reset 2026-01-10 15:02:29.881585 | orchestrator | 2026-01-10 15:02:27 | INFO  | Setting property hypervisor_type: qemu 2026-01-10 15:02:29.881588 | orchestrator | 2026-01-10 15:02:27 | INFO  | Setting property os_distro: ubuntu 2026-01-10 15:02:29.881592 | orchestrator | 2026-01-10 15:02:27 | INFO  | Setting property replace_frequency: quarterly 2026-01-10 15:02:29.881598 | orchestrator | 2026-01-10 15:02:27 | INFO  | Setting property uuid_validity: last-1 2026-01-10 15:02:29.881604 | orchestrator | 2026-01-10 15:02:27 | INFO  | Setting property provided_until: none 2026-01-10 15:02:29.881611 | orchestrator | 2026-01-10 15:02:28 | INFO  | Setting property os_purpose: network 2026-01-10 15:02:29.881632 | orchestrator 
| 2026-01-10 15:02:28 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-01-10 15:02:29.881639 | orchestrator | 2026-01-10 15:02:28 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-01-10 15:02:29.881643 | orchestrator | 2026-01-10 15:02:28 | INFO  | Setting property internal_version: 2026-01-10 2026-01-10 15:02:29.881647 | orchestrator | 2026-01-10 15:02:28 | INFO  | Setting property image_original_user: ubuntu 2026-01-10 15:02:29.881651 | orchestrator | 2026-01-10 15:02:29 | INFO  | Setting property os_version: 2026-01-10 2026-01-10 15:02:29.881655 | orchestrator | 2026-01-10 15:02:29 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260110.qcow2 2026-01-10 15:02:29.881659 | orchestrator | 2026-01-10 15:02:29 | INFO  | Setting property image_build_date: 2026-01-10 2026-01-10 15:02:29.881663 | orchestrator | 2026-01-10 15:02:29 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-01-10' 2026-01-10 15:02:29.881667 | orchestrator | 2026-01-10 15:02:29 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-01-10' 2026-01-10 15:02:29.881686 | orchestrator | 2026-01-10 15:02:29 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-01-10 15:02:29.881690 | orchestrator | 2026-01-10 15:02:29 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-01-10 15:02:29.881696 | orchestrator | 2026-01-10 15:02:29 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-01-10 15:02:29.881699 | orchestrator | 2026-01-10 15:02:29 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-01-10 15:02:30.334595 | orchestrator | ok: Runtime: 0:03:08.279226 2026-01-10 15:02:30.356338 | 2026-01-10 15:02:30.356524 | TASK [Run checks] 2026-01-10 15:02:31.118511 | orchestrator | + set -e 2026-01-10 15:02:31.118690 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-01-10 15:02:31.118707 | orchestrator | ++ export INTERACTIVE=false 2026-01-10 15:02:31.118716 | orchestrator | ++ INTERACTIVE=false 2026-01-10 15:02:31.118722 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-10 15:02:31.118729 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-10 15:02:31.118737 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-01-10 15:02:31.119646 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-01-10 15:02:31.123938 | orchestrator | 2026-01-10 15:02:31.124030 | orchestrator | # CHECK 2026-01-10 15:02:31.124042 | orchestrator | 2026-01-10 15:02:31.124052 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-10 15:02:31.124064 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-10 15:02:31.124071 | orchestrator | + echo 2026-01-10 15:02:31.124078 | orchestrator | + echo '# CHECK' 2026-01-10 15:02:31.124085 | orchestrator | + echo 2026-01-10 15:02:31.124096 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-10 15:02:31.124669 | orchestrator | ++ semver latest 5.0.0 2026-01-10 15:02:31.188761 | orchestrator | 2026-01-10 15:02:31.188862 | orchestrator | ## Containers @ testbed-manager 2026-01-10 15:02:31.188874 | orchestrator | 2026-01-10 15:02:31.188889 | orchestrator | + [[ -1 -eq -1 ]] 2026-01-10 15:02:31.188893 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-10 15:02:31.188898 | orchestrator | + echo 2026-01-10 15:02:31.188903 | orchestrator | + echo '## Containers @ testbed-manager' 2026-01-10 15:02:31.188908 | orchestrator | + echo 2026-01-10 15:02:31.188912 | orchestrator | + osism container testbed-manager ps 2026-01-10 15:02:33.299341 | orchestrator | 2026-01-10 15:02:33 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-01-10 15:02:33.712958 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 
2026-01-10 15:02:33.713124 | orchestrator | ca38f849edc7 registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_blackbox_exporter 2026-01-10 15:02:33.713141 | orchestrator | 273804e2e55c registry.osism.tech/kolla/prometheus-alertmanager:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager 2026-01-10 15:02:33.713146 | orchestrator | 61268d5200e1 registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-01-10 15:02:33.713154 | orchestrator | aa12df6f7677 registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2026-01-10 15:02:33.713162 | orchestrator | 6c5c7b4bb5eb registry.osism.tech/kolla/prometheus-server:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_server 2026-01-10 15:02:33.713167 | orchestrator | f5364b251a6e registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 17 minutes cephclient 2026-01-10 15:02:33.713171 | orchestrator | 48a4b9ec7c61 registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2026-01-10 15:02:33.713175 | orchestrator | 6bbb37ff7644 registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2026-01-10 15:02:33.713200 | orchestrator | b8f78c2d3571 registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2026-01-10 15:02:33.713205 | orchestrator | 78315606805e phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 31 minutes ago Up 31 minutes (healthy) 80/tcp phpmyadmin 2026-01-10 15:02:33.713208 | orchestrator | 7320e857add2 registry.osism.tech/osism/openstackclient:2025.1 "/usr/bin/dumb-init …" 32 minutes ago Up 32 minutes openstackclient 2026-01-10 15:02:33.713212 | orchestrator | f761e70775ce 
registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 32 minutes ago Up 32 minutes (healthy) 8080/tcp homer 2026-01-10 15:02:33.713216 | orchestrator | 2d6bdf2caf82 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 56 minutes ago Up 56 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2026-01-10 15:02:33.713220 | orchestrator | be17b9d90494 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" About an hour ago Up 38 minutes (healthy) manager-inventory_reconciler-1 2026-01-10 15:02:33.713224 | orchestrator | fd2d40f730fe registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" About an hour ago Up 39 minutes (healthy) ceph-ansible 2026-01-10 15:02:33.713245 | orchestrator | 7637cae0da83 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" About an hour ago Up 39 minutes (healthy) osism-kubernetes 2026-01-10 15:02:33.713250 | orchestrator | 8b2d701f65bc registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" About an hour ago Up 39 minutes (healthy) kolla-ansible 2026-01-10 15:02:33.713254 | orchestrator | 2a07714fada2 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" About an hour ago Up 39 minutes (healthy) osism-ansible 2026-01-10 15:02:33.713258 | orchestrator | cd27848637df registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" About an hour ago Up 39 minutes (healthy) 8000/tcp manager-ara-server-1 2026-01-10 15:02:33.713262 | orchestrator | dad9def28714 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-flower-1 2026-01-10 15:02:33.713265 | orchestrator | 2efde213ab44 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" About an hour ago Up 40 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2026-01-10 15:02:33.713269 | orchestrator | dce48a797ddf registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes 
(healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-01-10 15:02:33.713273 | orchestrator | da1f568350b2 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" About an hour ago Up 40 minutes (healthy) 6379/tcp manager-redis-1 2026-01-10 15:02:33.713281 | orchestrator | 55ca975118a8 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" About an hour ago Up 40 minutes (healthy) 3306/tcp manager-mariadb-1 2026-01-10 15:02:33.713285 | orchestrator | 3b8e11e05fef registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-openstack-1 2026-01-10 15:02:33.713289 | orchestrator | 8990ee86cbb8 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" About an hour ago Up 40 minutes (healthy) osismclient 2026-01-10 15:02:33.713293 | orchestrator | 45343f66a891 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-listener-1 2026-01-10 15:02:33.713297 | orchestrator | 62e9d4a28f91 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-beat-1 2026-01-10 15:02:33.713301 | orchestrator | a6748ca31a3c registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-01-10 15:02:34.036282 | orchestrator | 2026-01-10 15:02:34.036374 | orchestrator | ## Images @ testbed-manager 2026-01-10 15:02:34.036387 | orchestrator | 2026-01-10 15:02:34.036393 | orchestrator | + echo 2026-01-10 15:02:34.036400 | orchestrator | + echo '## Images @ testbed-manager' 2026-01-10 15:02:34.036452 | orchestrator | + echo 2026-01-10 15:02:34.036463 | orchestrator | + osism container testbed-manager images 2026-01-10 15:02:36.503737 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-10 15:02:36.503872 | orchestrator | 
registry.osism.tech/osism/openstackclient 2025.1 4372d3f9a6df 12 hours ago 211MB 2026-01-10 15:02:36.503881 | orchestrator | registry.osism.tech/osism/cephclient reef b441644e2eee 12 hours ago 453MB 2026-01-10 15:02:36.503886 | orchestrator | registry.osism.tech/kolla/cron 2025.1 9d120b7105f5 13 hours ago 271MB 2026-01-10 15:02:36.503892 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 746f114d0355 13 hours ago 585MB 2026-01-10 15:02:36.503897 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 a91be85ab8f4 13 hours ago 679MB 2026-01-10 15:02:36.503903 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2025.1 1f400ebd1ed6 13 hours ago 314MB 2026-01-10 15:02:36.503908 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 dbb749517df4 13 hours ago 311MB 2026-01-10 15:02:36.503913 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 d5c15f74b87b 13 hours ago 363MB 2026-01-10 15:02:36.503918 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2025.1 ed094b9d0cbf 13 hours ago 409MB 2026-01-10 15:02:36.503923 | orchestrator | registry.osism.tech/kolla/prometheus-server 2025.1 e773ffe71960 13 hours ago 855MB 2026-01-10 15:02:36.503929 | orchestrator | registry.osism.tech/osism/osism-ansible latest b59653aa95c7 15 hours ago 611MB 2026-01-10 15:02:36.503934 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 773d8b3ff6ac 15 hours ago 560MB 2026-01-10 15:02:36.503939 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest f30a21f2039f 15 hours ago 1.23GB 2026-01-10 15:02:36.503960 | orchestrator | registry.osism.tech/osism/osism latest 8c29e414bab3 15 hours ago 384MB 2026-01-10 15:02:36.503966 | orchestrator | registry.osism.tech/osism/osism-frontend latest 5e85434cda64 15 hours ago 239MB 2026-01-10 15:02:36.503971 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest c82dc72740b7 15 hours ago 335MB 2026-01-10 15:02:36.503976 | orchestrator | 
registry.osism.tech/osism/kolla-ansible 2025.1 0afd47ab248b 30 hours ago 607MB 2026-01-10 15:02:36.503981 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 5 weeks ago 11.5MB 2026-01-10 15:02:36.503986 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 8 weeks ago 334MB 2026-01-10 15:02:36.503991 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine 13105d2858de 2 months ago 41.4MB 2026-01-10 15:02:36.503996 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 3 months ago 742MB 2026-01-10 15:02:36.504001 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 4 months ago 275MB 2026-01-10 15:02:36.504006 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 5 months ago 226MB 2026-01-10 15:02:36.504011 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 19 months ago 146MB 2026-01-10 15:02:36.852863 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-10 15:02:36.852938 | orchestrator | ++ semver latest 5.0.0 2026-01-10 15:02:36.900988 | orchestrator | 2026-01-10 15:02:36.901060 | orchestrator | ## Containers @ testbed-node-0 2026-01-10 15:02:36.901067 | orchestrator | 2026-01-10 15:02:36.901071 | orchestrator | + [[ -1 -eq -1 ]] 2026-01-10 15:02:36.901075 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-10 15:02:36.901080 | orchestrator | + echo 2026-01-10 15:02:36.901084 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-01-10 15:02:36.901090 | orchestrator | + echo 2026-01-10 15:02:36.901094 | orchestrator | + osism container testbed-node-0 ps 2026-01-10 15:02:39.383804 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-01-10 15:02:39.383992 | orchestrator | 0d2d54519c82 registry.osism.tech/kolla/octavia-worker:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-01-10 15:02:39.384002 
| orchestrator | 390c1d0ea61b registry.osism.tech/kolla/octavia-housekeeping:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-01-10 15:02:39.384006 | orchestrator | 7236817b0df6 registry.osism.tech/kolla/octavia-health-manager:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-01-10 15:02:39.384023 | orchestrator | 83efeb898303 registry.osism.tech/kolla/octavia-driver-agent:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-01-10 15:02:39.384027 | orchestrator | 2e309c5f6abd registry.osism.tech/kolla/octavia-api:2025.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-01-10 15:02:39.384032 | orchestrator | 0513b0b58d38 registry.osism.tech/kolla/magnum-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2026-01-10 15:02:39.384036 | orchestrator | fd9013f0a764 registry.osism.tech/kolla/nova-novncproxy:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy 2026-01-10 15:02:39.384040 | orchestrator | d41d17dd52d8 registry.osism.tech/kolla/nova-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor 2026-01-10 15:02:39.384044 | orchestrator | 225db38f1627 registry.osism.tech/kolla/magnum-api:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2026-01-10 15:02:39.384063 | orchestrator | e0613d2e8f93 registry.osism.tech/kolla/grafana:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-01-10 15:02:39.384067 | orchestrator | c6d3028ff48a registry.osism.tech/kolla/placement-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2026-01-10 15:02:39.384071 | orchestrator | 6861c6da811e registry.osism.tech/kolla/neutron-server:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2026-01-10 15:02:39.384075 | orchestrator | 
9b047bd90df1 registry.osism.tech/kolla/designate-worker:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2026-01-10 15:02:39.384078 | orchestrator | b9d72f1b7434 registry.osism.tech/kolla/designate-mdns:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2026-01-10 15:02:39.384082 | orchestrator | 246a553aced6 registry.osism.tech/kolla/designate-producer:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2026-01-10 15:02:39.384086 | orchestrator | 705ff04c85ac registry.osism.tech/kolla/designate-central:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2026-01-10 15:02:39.384090 | orchestrator | fa55467d04b8 registry.osism.tech/kolla/designate-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2026-01-10 15:02:39.384094 | orchestrator | e7b056decde5 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_metadata 2026-01-10 15:02:39.384100 | orchestrator | 868a151fe0e9 registry.osism.tech/kolla/designate-backend-bind9:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9 2026-01-10 15:02:39.384106 | orchestrator | b3955a917199 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2026-01-10 15:02:39.384112 | orchestrator | 0ebeb945898c registry.osism.tech/kolla/nova-scheduler:2025.1 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-01-10 15:02:39.384149 | orchestrator | 39ac6f24aa52 registry.osism.tech/kolla/barbican-worker:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2026-01-10 15:02:39.384170 | orchestrator | 1417b661f0ee registry.osism.tech/kolla/barbican-keystone-listener:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 
2026-01-10 15:02:39.384175 | orchestrator | d81a526f130d registry.osism.tech/kolla/barbican-api:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2026-01-10 15:02:39.384181 | orchestrator | 2a7a89af5fc9 registry.osism.tech/kolla/cinder-backup:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_backup 2026-01-10 15:02:39.384190 | orchestrator | 807dd9d0a8a6 registry.osism.tech/kolla/cinder-volume:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_volume 2026-01-10 15:02:39.384196 | orchestrator | f4cd905af990 registry.osism.tech/kolla/glance-api:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2026-01-10 15:02:39.384201 | orchestrator | 6038a4695f8d registry.osism.tech/kolla/cinder-scheduler:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2026-01-10 15:02:39.384222 | orchestrator | 1e2e50d9b20d registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2026-01-10 15:02:39.384229 | orchestrator | 88bf803a86b8 registry.osism.tech/kolla/cinder-api:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2026-01-10 15:02:39.384235 | orchestrator | 9907c934eb54 registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-01-10 15:02:39.384241 | orchestrator | 5a5f9a910eeb registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2026-01-10 15:02:39.384247 | orchestrator | 74cbe0e1cae5 registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2026-01-10 15:02:39.384253 | orchestrator | c0ef88ffc189 registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 
15 minutes ago Up 15 minutes prometheus_node_exporter 2026-01-10 15:02:39.384260 | orchestrator | 8caf89554fbe registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 15 minutes ceph-mgr-testbed-node-0 2026-01-10 15:02:39.384268 | orchestrator | a2e8b00c1348 registry.osism.tech/kolla/keystone:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2026-01-10 15:02:39.384279 | orchestrator | ea831e7ae1da registry.osism.tech/kolla/keystone-fernet:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2026-01-10 15:02:39.384288 | orchestrator | 3cca2867a703 registry.osism.tech/kolla/keystone-ssh:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2026-01-10 15:02:39.384294 | orchestrator | db9801743760 registry.osism.tech/kolla/horizon:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2026-01-10 15:02:39.384301 | orchestrator | c34806ed2801 registry.osism.tech/kolla/mariadb-server:2025.1 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2026-01-10 15:02:39.384312 | orchestrator | 043c2c8f2fd5 registry.osism.tech/kolla/opensearch-dashboards:2025.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2026-01-10 15:02:39.384319 | orchestrator | 2d44d4035aed registry.osism.tech/kolla/opensearch:2025.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2026-01-10 15:02:39.384327 | orchestrator | 9656542706aa registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0 2026-01-10 15:02:39.384336 | orchestrator | cbd4498f93a3 registry.osism.tech/kolla/keepalived:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2026-01-10 15:02:39.384370 | orchestrator | f64bd8bd0cb6 registry.osism.tech/kolla/proxysql:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2026-01-10 
15:02:39.384377 | orchestrator | cfe6ba2b404d registry.osism.tech/kolla/haproxy:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2026-01-10 15:02:39.384396 | orchestrator | cf049e5caaf5 registry.osism.tech/kolla/ovn-northd:2025.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd 2026-01-10 15:02:39.384402 | orchestrator | 5e732c1b74f0 registry.osism.tech/kolla/ovn-sb-db-relay:2025.1 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db_relay_1 2026-01-10 15:02:39.384413 | orchestrator | 3736fe42e994 registry.osism.tech/kolla/ovn-sb-db-server:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2026-01-10 15:02:39.384420 | orchestrator | 2a02f90c6691 registry.osism.tech/kolla/ovn-nb-db-server:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2026-01-10 15:02:39.384425 | orchestrator | f3a8f1b043b4 registry.osism.tech/kolla/ovn-controller:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2026-01-10 15:02:39.384431 | orchestrator | c8120fdfe5a3 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0 2026-01-10 15:02:39.384478 | orchestrator | 476dee281b1a registry.osism.tech/kolla/rabbitmq:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2026-01-10 15:02:39.384485 | orchestrator | a0a63c1998e6 registry.osism.tech/kolla/openvswitch-vswitchd:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2026-01-10 15:02:39.384489 | orchestrator | ab33c73ab88e registry.osism.tech/kolla/openvswitch-db-server:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2026-01-10 15:02:39.384492 | orchestrator | e04240f746ca registry.osism.tech/kolla/redis-sentinel:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2026-01-10 15:02:39.384497 | orchestrator | c2b4b306924a 
registry.osism.tech/kolla/redis:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2026-01-10 15:02:39.384500 | orchestrator | 8bb14d4dfc5c registry.osism.tech/kolla/memcached:2025.1 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) memcached 2026-01-10 15:02:39.384504 | orchestrator | df552d171dfa registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2026-01-10 15:02:39.384508 | orchestrator | 0ef9719acd6d registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-01-10 15:02:39.384512 | orchestrator | 154ebdf1a3f7 registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2026-01-10 15:02:39.719425 | orchestrator | 2026-01-10 15:02:39.719541 | orchestrator | ## Images @ testbed-node-0 2026-01-10 15:02:39.719552 | orchestrator | 2026-01-10 15:02:39.719560 | orchestrator | + echo 2026-01-10 15:02:39.719570 | orchestrator | + echo '## Images @ testbed-node-0' 2026-01-10 15:02:39.719578 | orchestrator | + echo 2026-01-10 15:02:39.719585 | orchestrator | + osism container testbed-node-0 images 2026-01-10 15:02:42.285558 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-10 15:02:42.285688 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 2ffb60ff6501 12 hours ago 1.27GB 2026-01-10 15:02:42.285701 | orchestrator | registry.osism.tech/kolla/memcached 2025.1 147fb51206a9 13 hours ago 272MB 2026-01-10 15:02:42.285708 | orchestrator | registry.osism.tech/kolla/grafana 2025.1 d60d501bfb2e 13 hours ago 1.02GB 2026-01-10 15:02:42.285716 | orchestrator | registry.osism.tech/kolla/opensearch 2025.1 180fefcd6471 13 hours ago 1.56GB 2026-01-10 15:02:42.285723 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2025.1 28bfab3a635d 13 hours ago 1.53GB 2026-01-10 15:02:42.285731 | orchestrator | registry.osism.tech/kolla/cron 2025.1 9d120b7105f5 13 hours ago 271MB 2026-01-10 
15:02:42.285758 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 746f114d0355 13 hours ago 585MB 2026-01-10 15:02:42.285765 | orchestrator | registry.osism.tech/kolla/proxysql 2025.1 904212638264 13 hours ago 418MB 2026-01-10 15:02:42.285772 | orchestrator | registry.osism.tech/kolla/keepalived 2025.1 9b207f58f544 13 hours ago 282MB 2026-01-10 15:02:42.285779 | orchestrator | registry.osism.tech/kolla/haproxy 2025.1 1a948fcc1079 13 hours ago 280MB 2026-01-10 15:02:42.285786 | orchestrator | registry.osism.tech/kolla/rabbitmq 2025.1 936747a1a04d 13 hours ago 345MB 2026-01-10 15:02:42.285793 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 a91be85ab8f4 13 hours ago 679MB 2026-01-10 15:02:42.285797 | orchestrator | registry.osism.tech/kolla/redis 2025.1 b06a63fd8b65 13 hours ago 278MB 2026-01-10 15:02:42.285801 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2025.1 81c9552f7b8a 13 hours ago 278MB 2026-01-10 15:02:42.285804 | orchestrator | registry.osism.tech/kolla/mariadb-server 2025.1 3edbca01402d 13 hours ago 458MB 2026-01-10 15:02:42.285808 | orchestrator | registry.osism.tech/kolla/horizon 2025.1 c36340c4fdce 13 hours ago 1.2GB 2026-01-10 15:02:42.285812 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2025.1 dc7de6faec04 13 hours ago 288MB 2026-01-10 15:02:42.285815 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2025.1 e7b1a1f98880 13 hours ago 288MB 2026-01-10 15:02:42.285819 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2025.1 9e66eaaf6171 13 hours ago 307MB 2026-01-10 15:02:42.285823 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2025.1 e59629a5a944 13 hours ago 297MB 2026-01-10 15:02:42.285826 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2025.1 a9cffb3ea040 13 hours ago 304MB 2026-01-10 15:02:42.285830 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 dbb749517df4 13 hours ago 311MB 
2026-01-10 15:02:42.285834 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 d5c15f74b87b 13 hours ago 363MB 2026-01-10 15:02:42.285838 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2025.1 255b8d02016f 13 hours ago 1.01GB 2026-01-10 15:02:42.285854 | orchestrator | registry.osism.tech/kolla/skyline-console 2025.1 8502d10c4633 13 hours ago 1.06GB 2026-01-10 15:02:42.285858 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2025.1 0505f0262488 13 hours ago 1.07GB 2026-01-10 15:02:42.285861 | orchestrator | registry.osism.tech/kolla/octavia-worker 2025.1 77c32aec0ebe 13 hours ago 1.05GB 2026-01-10 15:02:42.285865 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2025.1 50baf56511e9 13 hours ago 1.05GB 2026-01-10 15:02:42.285869 | orchestrator | registry.osism.tech/kolla/octavia-api 2025.1 a1f761f3e3e0 13 hours ago 1.07GB 2026-01-10 15:02:42.285873 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2025.1 05ca4b8f3771 13 hours ago 1.05GB 2026-01-10 15:02:42.285876 | orchestrator | registry.osism.tech/kolla/nova-api 2025.1 3f4e8b4813be 13 hours ago 1.23GB 2026-01-10 15:02:42.285880 | orchestrator | registry.osism.tech/kolla/nova-conductor 2025.1 b419da07022f 13 hours ago 1.23GB 2026-01-10 15:02:42.285884 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2025.1 364df2ecd9fa 13 hours ago 1.39GB 2026-01-10 15:02:42.285887 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2025.1 a99e2f6f88b8 13 hours ago 1.23GB 2026-01-10 15:02:42.285891 | orchestrator | registry.osism.tech/kolla/glance-api 2025.1 796a218454e3 13 hours ago 1.12GB 2026-01-10 15:02:42.285895 | orchestrator | registry.osism.tech/kolla/cinder-volume 2025.1 a263da2e21fa 13 hours ago 1.79GB 2026-01-10 15:02:42.285934 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2025.1 8ed09a39d47d 13 hours ago 1.43GB 2026-01-10 15:02:42.285939 | orchestrator | registry.osism.tech/kolla/cinder-api 2025.1 2b160816b1ac 13 
hours ago 1.43GB 2026-01-10 15:02:42.285943 | orchestrator | registry.osism.tech/kolla/cinder-backup 2025.1 4c8c5af90125 13 hours ago 1.44GB 2026-01-10 15:02:42.285949 | orchestrator | registry.osism.tech/kolla/aodh-listener 2025.1 c4e23660993c 13 hours ago 991MB 2026-01-10 15:02:42.285955 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2025.1 437d1183a7cf 13 hours ago 991MB 2026-01-10 15:02:42.285962 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2025.1 739871363fbb 13 hours ago 991MB 2026-01-10 15:02:42.285968 | orchestrator | registry.osism.tech/kolla/aodh-api 2025.1 6e5fe69f5bf8 13 hours ago 990MB 2026-01-10 15:02:42.285974 | orchestrator | registry.osism.tech/kolla/placement-api 2025.1 9c23ac2599f5 13 hours ago 992MB 2026-01-10 15:02:42.285980 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2025.1 4382161fa476 13 hours ago 992MB 2026-01-10 15:02:42.285986 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2025.1 05d68d779d88 13 hours ago 993MB 2026-01-10 15:02:42.285992 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2025.1 a78c8fbbe2ec 13 hours ago 1.26GB 2026-01-10 15:02:42.285999 | orchestrator | registry.osism.tech/kolla/magnum-api 2025.1 3ca716d3f9e6 13 hours ago 1.15GB 2026-01-10 15:02:42.286005 | orchestrator | registry.osism.tech/kolla/neutron-server 2025.1 ecf3aec5ac7c 13 hours ago 1.24GB 2026-01-10 15:02:42.286052 | orchestrator | registry.osism.tech/kolla/designate-central 2025.1 628f4e2f17cc 13 hours ago 1GB 2026-01-10 15:02:42.286059 | orchestrator | registry.osism.tech/kolla/designate-mdns 2025.1 0570068f444f 13 hours ago 1GB 2026-01-10 15:02:42.286063 | orchestrator | registry.osism.tech/kolla/designate-worker 2025.1 45de8c76fa6d 13 hours ago 1.01GB 2026-01-10 15:02:42.286066 | orchestrator | registry.osism.tech/kolla/designate-producer 2025.1 83f657439974 13 hours ago 1GB 2026-01-10 15:02:42.286070 | orchestrator | registry.osism.tech/kolla/designate-api 2025.1 aafa8adffc1e 13 hours 
ago 1GB 2026-01-10 15:02:42.286078 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2025.1 0872eb9ebdd7 13 hours ago 1.01GB 2026-01-10 15:02:42.286082 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2025.1 940b0f3b627a 13 hours ago 1GB 2026-01-10 15:02:42.286086 | orchestrator | registry.osism.tech/kolla/barbican-api 2025.1 31db20473c45 13 hours ago 1e+03MB 2026-01-10 15:02:42.286090 | orchestrator | registry.osism.tech/kolla/barbican-worker 2025.1 4293da1b6a24 13 hours ago 1GB 2026-01-10 15:02:42.286093 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2025.1 2c7477cf5058 13 hours ago 1.05GB 2026-01-10 15:02:42.286097 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2025.1 0873caa4afe3 13 hours ago 1.05GB 2026-01-10 15:02:42.286101 | orchestrator | registry.osism.tech/kolla/keystone 2025.1 b047a8b081f1 13 hours ago 1.1GB 2026-01-10 15:02:42.286105 | orchestrator | registry.osism.tech/kolla/ovn-controller 2025.1 ace03ecb6fa9 13 hours ago 296MB 2026-01-10 15:02:42.286109 | orchestrator | registry.osism.tech/kolla/ovn-northd 2025.1 cadfe8336bd1 13 hours ago 295MB 2026-01-10 15:02:42.286112 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2025.1 38ecd2b02d75 13 hours ago 295MB 2026-01-10 15:02:42.286116 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2025.1 3293088ce921 13 hours ago 295MB 2026-01-10 15:02:42.286120 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-relay 2025.1 b7c7d5db18f9 13 hours ago 295MB 2026-01-10 15:02:42.635118 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-10 15:02:42.635947 | orchestrator | ++ semver latest 5.0.0 2026-01-10 15:02:42.690183 | orchestrator | 2026-01-10 15:02:42.690282 | orchestrator | ## Containers @ testbed-node-1 2026-01-10 15:02:42.690294 | orchestrator | 2026-01-10 15:02:42.690300 | orchestrator | + [[ -1 -eq -1 ]] 2026-01-10 15:02:42.690305 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 
2026-01-10 15:02:42.690310 | orchestrator | + echo 2026-01-10 15:02:42.690316 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-01-10 15:02:42.690322 | orchestrator | + echo 2026-01-10 15:02:42.690327 | orchestrator | + osism container testbed-node-1 ps 2026-01-10 15:02:45.112723 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-01-10 15:02:45.112808 | orchestrator | 5cc210a69ed1 registry.osism.tech/kolla/octavia-worker:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-01-10 15:02:45.112822 | orchestrator | f99763e658a5 registry.osism.tech/kolla/octavia-housekeeping:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-01-10 15:02:45.112833 | orchestrator | 86f9c207cdd9 registry.osism.tech/kolla/octavia-health-manager:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-01-10 15:02:45.112839 | orchestrator | 45e797a07852 registry.osism.tech/kolla/octavia-driver-agent:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-01-10 15:02:45.112847 | orchestrator | 80e00219e32c registry.osism.tech/kolla/octavia-api:2025.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-01-10 15:02:45.112858 | orchestrator | c9d29b628400 registry.osism.tech/kolla/grafana:2025.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-01-10 15:02:45.112865 | orchestrator | ac24eb4147c7 registry.osism.tech/kolla/nova-novncproxy:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy 2026-01-10 15:02:45.112872 | orchestrator | e8e18739f87a registry.osism.tech/kolla/magnum-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2026-01-10 15:02:45.112879 | orchestrator | 7ff91f728b74 registry.osism.tech/kolla/magnum-api:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2026-01-10 
15:02:45.112885 | orchestrator | 5d4bf53afefe registry.osism.tech/kolla/nova-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor 2026-01-10 15:02:45.112892 | orchestrator | bec30eccd6b1 registry.osism.tech/kolla/placement-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2026-01-10 15:02:45.112918 | orchestrator | 36fdcfc84089 registry.osism.tech/kolla/neutron-server:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2026-01-10 15:02:45.112926 | orchestrator | 67a648c8a7b6 registry.osism.tech/kolla/designate-worker:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2026-01-10 15:02:45.112933 | orchestrator | b5e037d4330b registry.osism.tech/kolla/designate-mdns:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2026-01-10 15:02:45.112941 | orchestrator | 33863b2f0664 registry.osism.tech/kolla/designate-producer:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2026-01-10 15:02:45.112948 | orchestrator | 272379aa1240 registry.osism.tech/kolla/designate-central:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2026-01-10 15:02:45.113020 | orchestrator | c4b85f4f2833 registry.osism.tech/kolla/designate-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2026-01-10 15:02:45.113029 | orchestrator | bd922e880018 registry.osism.tech/kolla/designate-backend-bind9:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9 2026-01-10 15:02:45.113036 | orchestrator | 45dda4df5091 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_metadata 2026-01-10 15:02:45.113043 | orchestrator | f2092432e7bb registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 
2026-01-10 15:02:45.113050 | orchestrator | 5388aed12986 registry.osism.tech/kolla/nova-scheduler:2025.1 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-01-10 15:02:45.113071 | orchestrator | 32a020c8b9c1 registry.osism.tech/kolla/barbican-worker:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2026-01-10 15:02:45.113078 | orchestrator | 0c96834fe780 registry.osism.tech/kolla/barbican-keystone-listener:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2026-01-10 15:02:45.113085 | orchestrator | fedbbc94e381 registry.osism.tech/kolla/barbican-api:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2026-01-10 15:02:45.113093 | orchestrator | 1a0890a7d68d registry.osism.tech/kolla/cinder-backup:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_backup 2026-01-10 15:02:45.113100 | orchestrator | 130b9a5d81e5 registry.osism.tech/kolla/cinder-volume:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_volume 2026-01-10 15:02:45.113106 | orchestrator | 978c8a8665c5 registry.osism.tech/kolla/glance-api:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2026-01-10 15:02:45.113112 | orchestrator | 6873ccf8b478 registry.osism.tech/kolla/cinder-scheduler:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2026-01-10 15:02:45.113119 | orchestrator | 27616a0b78aa registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2026-01-10 15:02:45.113127 | orchestrator | 3e43b6bbdc5d registry.osism.tech/kolla/cinder-api:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2026-01-10 15:02:45.113134 | orchestrator | d3e5372c2e06 registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 15 minutes 
ago Up 15 minutes prometheus_cadvisor
2026-01-10 15:02:45.113141 | orchestrator | 7d2c30d2de41 registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter
2026-01-10 15:02:45.113148 | orchestrator | 1768ee2875a5 registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter
2026-01-10 15:02:45.113154 | orchestrator | 6f1239d6dc71 registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2026-01-10 15:02:45.113168 | orchestrator | f86452d50575 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1
2026-01-10 15:02:45.113197 | orchestrator | a681da2a48a9 registry.osism.tech/kolla/keystone:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2026-01-10 15:02:45.113204 | orchestrator | 524d6ed821a0 registry.osism.tech/kolla/keystone-fernet:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet
2026-01-10 15:02:45.113212 | orchestrator | 5f89cccf4de7 registry.osism.tech/kolla/horizon:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2026-01-10 15:02:45.113218 | orchestrator | 505f92b8ca78 registry.osism.tech/kolla/keystone-ssh:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2026-01-10 15:02:45.113227 | orchestrator | 047a647be259 registry.osism.tech/kolla/opensearch-dashboards:2025.1 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2026-01-10 15:02:45.113234 | orchestrator | 8f2f63f010db registry.osism.tech/kolla/mariadb-server:2025.1 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2026-01-10 15:02:45.113241 | orchestrator | 8cb207985e71 registry.osism.tech/kolla/opensearch:2025.1 "dumb-init --single-…" 22 minutes ago Up 21 minutes (healthy) opensearch
2026-01-10 15:02:45.113248 | orchestrator | d83d693a9ff2 registry.osism.tech/kolla/keepalived:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2026-01-10 15:02:45.113254 | orchestrator | d38bf328101e registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-1
2026-01-10 15:02:45.113272 | orchestrator | 5143a34012a2 registry.osism.tech/kolla/proxysql:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2026-01-10 15:02:45.113279 | orchestrator | 29d1bcc0e72b registry.osism.tech/kolla/haproxy:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2026-01-10 15:02:45.113286 | orchestrator | ca900a6172d7 registry.osism.tech/kolla/ovn-northd:2025.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd
2026-01-10 15:02:45.113293 | orchestrator | 8ecf8a086b76 registry.osism.tech/kolla/ovn-sb-db-relay:2025.1 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db_relay_1
2026-01-10 15:02:45.113300 | orchestrator | 660559250db7 registry.osism.tech/kolla/rabbitmq:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2026-01-10 15:02:45.113307 | orchestrator | c404a28f93bf registry.osism.tech/kolla/ovn-sb-db-server:2025.1 "dumb-init --single-…" 27 minutes ago Up 25 minutes ovn_sb_db
2026-01-10 15:02:45.113313 | orchestrator | 3f9f5d565552 registry.osism.tech/kolla/ovn-nb-db-server:2025.1 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db
2026-01-10 15:02:45.113320 | orchestrator | 3eb3174a537a registry.osism.tech/kolla/ovn-controller:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller
2026-01-10 15:02:45.113327 | orchestrator | 40334733961f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1
2026-01-10 15:02:45.113335 | orchestrator | 0a1d77040d5a registry.osism.tech/kolla/openvswitch-vswitchd:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd
2026-01-10 15:02:45.113342 | orchestrator | 5343e28a0cfa registry.osism.tech/kolla/openvswitch-db-server:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2026-01-10 15:02:45.113356 | orchestrator | 5c94eeb06b18 registry.osism.tech/kolla/redis-sentinel:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2026-01-10 15:02:45.113365 | orchestrator | 84f1a5230407 registry.osism.tech/kolla/redis:2025.1 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) redis
2026-01-10 15:02:45.113370 | orchestrator | 41a10e2be9e6 registry.osism.tech/kolla/memcached:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached
2026-01-10 15:02:45.113374 | orchestrator | 55c10d4032f5 registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2026-01-10 15:02:45.113379 | orchestrator | dfea1ceff92a registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2026-01-10 15:02:45.113383 | orchestrator | 694cfe6c4d87 registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2026-01-10 15:02:45.511205 | orchestrator |
2026-01-10 15:02:45.511328 | orchestrator | ## Images @ testbed-node-1
2026-01-10 15:02:45.511341 | orchestrator |
2026-01-10 15:02:45.511347 | orchestrator | + echo
2026-01-10 15:02:45.511354 | orchestrator | + echo '## Images @ testbed-node-1'
2026-01-10 15:02:45.511362 | orchestrator | + echo
2026-01-10 15:02:45.511370 | orchestrator | + osism container testbed-node-1 images
2026-01-10 15:02:48.097747 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-01-10 15:02:48.097830 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 2ffb60ff6501 12 hours ago 1.27GB
2026-01-10 15:02:48.097838 | orchestrator | registry.osism.tech/kolla/memcached 2025.1 147fb51206a9 13 hours ago 272MB
2026-01-10 15:02:48.097844 | orchestrator | registry.osism.tech/kolla/grafana 2025.1 d60d501bfb2e 13 hours ago 1.02GB
2026-01-10 15:02:48.099346 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2025.1 28bfab3a635d 13 hours ago 1.53GB
2026-01-10 15:02:48.099422 | orchestrator | registry.osism.tech/kolla/opensearch 2025.1 180fefcd6471 13 hours ago 1.56GB
2026-01-10 15:02:48.099432 | orchestrator | registry.osism.tech/kolla/cron 2025.1 9d120b7105f5 13 hours ago 271MB
2026-01-10 15:02:48.099441 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 746f114d0355 13 hours ago 585MB
2026-01-10 15:02:48.099447 | orchestrator | registry.osism.tech/kolla/proxysql 2025.1 904212638264 13 hours ago 418MB
2026-01-10 15:02:48.099452 | orchestrator | registry.osism.tech/kolla/keepalived 2025.1 9b207f58f544 13 hours ago 282MB
2026-01-10 15:02:48.099509 | orchestrator | registry.osism.tech/kolla/haproxy 2025.1 1a948fcc1079 13 hours ago 280MB
2026-01-10 15:02:48.099515 | orchestrator | registry.osism.tech/kolla/rabbitmq 2025.1 936747a1a04d 13 hours ago 345MB
2026-01-10 15:02:48.099520 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 a91be85ab8f4 13 hours ago 679MB
2026-01-10 15:02:48.099525 | orchestrator | registry.osism.tech/kolla/redis 2025.1 b06a63fd8b65 13 hours ago 278MB
2026-01-10 15:02:48.099531 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2025.1 81c9552f7b8a 13 hours ago 278MB
2026-01-10 15:02:48.099536 | orchestrator | registry.osism.tech/kolla/mariadb-server 2025.1 3edbca01402d 13 hours ago 458MB
2026-01-10 15:02:48.099541 | orchestrator | registry.osism.tech/kolla/horizon 2025.1 c36340c4fdce 13 hours ago 1.2GB
2026-01-10 15:02:48.099546 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2025.1 dc7de6faec04 13 hours ago 288MB
2026-01-10 15:02:48.099551 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2025.1 e7b1a1f98880 13 hours ago 288MB
2026-01-10 15:02:48.099576 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2025.1 9e66eaaf6171 13 hours ago 307MB
2026-01-10 15:02:48.099581 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2025.1 e59629a5a944 13 hours ago 297MB
2026-01-10 15:02:48.099586 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2025.1 a9cffb3ea040 13 hours ago 304MB
2026-01-10 15:02:48.099591 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 dbb749517df4 13 hours ago 311MB
2026-01-10 15:02:48.099596 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 d5c15f74b87b 13 hours ago 363MB
2026-01-10 15:02:48.099601 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2025.1 0505f0262488 13 hours ago 1.07GB
2026-01-10 15:02:48.099606 | orchestrator | registry.osism.tech/kolla/octavia-worker 2025.1 77c32aec0ebe 13 hours ago 1.05GB
2026-01-10 15:02:48.099613 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2025.1 50baf56511e9 13 hours ago 1.05GB
2026-01-10 15:02:48.099618 | orchestrator | registry.osism.tech/kolla/octavia-api 2025.1 a1f761f3e3e0 13 hours ago 1.07GB
2026-01-10 15:02:48.099623 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2025.1 05ca4b8f3771 13 hours ago 1.05GB
2026-01-10 15:02:48.099628 | orchestrator | registry.osism.tech/kolla/nova-api 2025.1 3f4e8b4813be 13 hours ago 1.23GB
2026-01-10 15:02:48.099633 | orchestrator | registry.osism.tech/kolla/nova-conductor 2025.1 b419da07022f 13 hours ago 1.23GB
2026-01-10 15:02:48.099638 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2025.1 364df2ecd9fa 13 hours ago 1.39GB
2026-01-10 15:02:48.099643 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2025.1 a99e2f6f88b8 13 hours ago 1.23GB
2026-01-10 15:02:48.099648 | orchestrator | registry.osism.tech/kolla/glance-api 2025.1 796a218454e3 13 hours ago 1.12GB
2026-01-10 15:02:48.099654 | orchestrator | registry.osism.tech/kolla/cinder-volume 2025.1 a263da2e21fa 13 hours ago 1.79GB
2026-01-10 15:02:48.099659 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2025.1 8ed09a39d47d 13 hours ago 1.43GB
2026-01-10 15:02:48.099664 | orchestrator | registry.osism.tech/kolla/cinder-api 2025.1 2b160816b1ac 13 hours ago 1.43GB
2026-01-10 15:02:48.099690 | orchestrator | registry.osism.tech/kolla/cinder-backup 2025.1 4c8c5af90125 13 hours ago 1.44GB
2026-01-10 15:02:48.099695 | orchestrator | registry.osism.tech/kolla/placement-api 2025.1 9c23ac2599f5 13 hours ago 992MB
2026-01-10 15:02:48.099700 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2025.1 a78c8fbbe2ec 13 hours ago 1.26GB
2026-01-10 15:02:48.099705 | orchestrator | registry.osism.tech/kolla/magnum-api 2025.1 3ca716d3f9e6 13 hours ago 1.15GB
2026-01-10 15:02:48.099710 | orchestrator | registry.osism.tech/kolla/neutron-server 2025.1 ecf3aec5ac7c 13 hours ago 1.24GB
2026-01-10 15:02:48.099716 | orchestrator | registry.osism.tech/kolla/designate-central 2025.1 628f4e2f17cc 13 hours ago 1GB
2026-01-10 15:02:48.099721 | orchestrator | registry.osism.tech/kolla/designate-mdns 2025.1 0570068f444f 13 hours ago 1GB
2026-01-10 15:02:48.099727 | orchestrator | registry.osism.tech/kolla/designate-worker 2025.1 45de8c76fa6d 13 hours ago 1.01GB
2026-01-10 15:02:48.099732 | orchestrator | registry.osism.tech/kolla/designate-producer 2025.1 83f657439974 13 hours ago 1GB
2026-01-10 15:02:48.099737 | orchestrator | registry.osism.tech/kolla/designate-api 2025.1 aafa8adffc1e 13 hours ago 1GB
2026-01-10 15:02:48.099753 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2025.1 0872eb9ebdd7 13 hours ago 1.01GB
2026-01-10 15:02:48.099758 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2025.1 940b0f3b627a 13 hours ago 1GB
2026-01-10 15:02:48.099788 | orchestrator | registry.osism.tech/kolla/barbican-api 2025.1 31db20473c45 13 hours ago 1e+03MB
2026-01-10 15:02:48.099794 | orchestrator | registry.osism.tech/kolla/barbican-worker 2025.1 4293da1b6a24 13 hours ago 1GB
2026-01-10 15:02:48.099799 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2025.1 2c7477cf5058 13 hours ago 1.05GB
2026-01-10 15:02:48.099804 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2025.1 0873caa4afe3 13 hours ago 1.05GB
2026-01-10 15:02:48.099809 | orchestrator | registry.osism.tech/kolla/keystone 2025.1 b047a8b081f1 13 hours ago 1.1GB
2026-01-10 15:02:48.099814 | orchestrator | registry.osism.tech/kolla/ovn-controller 2025.1 ace03ecb6fa9 13 hours ago 296MB
2026-01-10 15:02:48.099819 | orchestrator | registry.osism.tech/kolla/ovn-northd 2025.1 cadfe8336bd1 13 hours ago 295MB
2026-01-10 15:02:48.099824 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2025.1 38ecd2b02d75 13 hours ago 295MB
2026-01-10 15:02:48.099829 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2025.1 3293088ce921 13 hours ago 295MB
2026-01-10 15:02:48.099835 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-relay 2025.1 b7c7d5db18f9 13 hours ago 295MB
2026-01-10 15:02:48.453874 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-01-10 15:02:48.454135 | orchestrator | ++ semver latest 5.0.0
2026-01-10 15:02:48.517288 | orchestrator | + [[ -1 -eq -1 ]]
2026-01-10 15:02:48.517380 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-01-10 15:02:48.517393 | orchestrator | + echo
2026-01-10 15:02:48.517402 | orchestrator |
2026-01-10 15:02:48.517410 | orchestrator | ## Containers @ testbed-node-2
2026-01-10 15:02:48.517418 | orchestrator |
2026-01-10 15:02:48.517426 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-01-10 15:02:48.517435 | orchestrator | + echo
2026-01-10 15:02:48.517443 | orchestrator | + osism container testbed-node-2 ps
2026-01-10 15:02:51.035359 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-01-10 15:02:51.035536 | orchestrator | 47fb2c29944f registry.osism.tech/kolla/octavia-worker:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-01-10 15:02:51.035549 | orchestrator | 1d6ffbf62d18 registry.osism.tech/kolla/octavia-housekeeping:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-01-10 15:02:51.035557 | orchestrator | 09128a4b6a2c registry.osism.tech/kolla/octavia-health-manager:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-01-10 15:02:51.035563 | orchestrator | c22ce87c23ee registry.osism.tech/kolla/octavia-driver-agent:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-01-10 15:02:51.035570 | orchestrator | e374b78ed13e registry.osism.tech/kolla/octavia-api:2025.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2026-01-10 15:02:51.035578 | orchestrator | 4b5627726d75 registry.osism.tech/kolla/grafana:2025.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-01-10 15:02:51.035585 | orchestrator | 4c7b46b69d66 registry.osism.tech/kolla/magnum-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2026-01-10 15:02:51.035592 | orchestrator | 5957c1e12618 registry.osism.tech/kolla/nova-novncproxy:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy
2026-01-10 15:02:51.035599 | orchestrator | 67b283b6fe69 registry.osism.tech/kolla/magnum-api:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2026-01-10 15:02:51.035648 | orchestrator | 4da1a45f2c25 registry.osism.tech/kolla/nova-conductor:2025.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_conductor
2026-01-10 15:02:51.035657 | orchestrator | 41363551c9e8 registry.osism.tech/kolla/placement-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2026-01-10 15:02:51.035664 | orchestrator | f7ba1661baac registry.osism.tech/kolla/neutron-server:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2026-01-10 15:02:51.035671 | orchestrator | fc515c81885f registry.osism.tech/kolla/designate-worker:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker
2026-01-10 15:02:51.035677 | orchestrator | ae18513a8b64 registry.osism.tech/kolla/designate-mdns:2025.1 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) designate_mdns
2026-01-10 15:02:51.035685 | orchestrator | 025ef84fe395 registry.osism.tech/kolla/designate-producer:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer
2026-01-10 15:02:51.035690 | orchestrator | 5ff48eda6baf registry.osism.tech/kolla/designate-central:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central
2026-01-10 15:02:51.035695 | orchestrator | 0460989fd332 registry.osism.tech/kolla/designate-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api
2026-01-10 15:02:51.035699 | orchestrator | 0935029f41fd registry.osism.tech/kolla/designate-backend-bind9:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9
2026-01-10 15:02:51.035704 | orchestrator | 5ee3a4f385a7 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_metadata
2026-01-10 15:02:51.035711 | orchestrator | b7cb2b6a04f4 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2026-01-10 15:02:51.035716 | orchestrator | bd598915118d registry.osism.tech/kolla/nova-scheduler:2025.1 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2026-01-10 15:02:51.035752 | orchestrator | 0b01c45db14f registry.osism.tech/kolla/barbican-worker:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker
2026-01-10 15:02:51.035761 | orchestrator | 5f565c1c9ec3 registry.osism.tech/kolla/barbican-keystone-listener:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener
2026-01-10 15:02:51.035799 | orchestrator | 3b7e33996c75 registry.osism.tech/kolla/barbican-api:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api
2026-01-10 15:02:51.035808 | orchestrator | a2728320843e registry.osism.tech/kolla/cinder-backup:2025.1 "dumb-init --single-…" 14 minutes ago Up 13 minutes (healthy) cinder_backup
2026-01-10 15:02:51.035815 | orchestrator | 453f85818463 registry.osism.tech/kolla/cinder-volume:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_volume
2026-01-10 15:02:51.035823 | orchestrator | 50db897badc1 registry.osism.tech/kolla/glance-api:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api
2026-01-10 15:02:51.035830 | orchestrator | 27f5f889dfd6 registry.osism.tech/kolla/cinder-scheduler:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler
2026-01-10 15:02:51.035837 | orchestrator | f5304b241dd2 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2026-01-10 15:02:51.035861 | orchestrator | bda271a7e65a registry.osism.tech/kolla/cinder-api:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2026-01-10 15:02:51.035867 | orchestrator | 95f7f6b133e7 registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2026-01-10 15:02:51.035874 | orchestrator | 5715e938e7e1 registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter
2026-01-10 15:02:51.035886 | orchestrator | 0ad97cbcafa8 registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter
2026-01-10 15:02:51.035893 | orchestrator | 08d31dd03355 registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2026-01-10 15:02:51.035900 | orchestrator | 10afb0673c2b registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2
2026-01-10 15:02:51.035906 | orchestrator | 2669ff3f94a8 registry.osism.tech/kolla/keystone:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone
2026-01-10 15:02:51.035913 | orchestrator | 513d326084c9 registry.osism.tech/kolla/keystone-fernet:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet
2026-01-10 15:02:51.035920 | orchestrator | bae2e62d9526 registry.osism.tech/kolla/horizon:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2026-01-10 15:02:51.035926 | orchestrator | f9e0cb023297 registry.osism.tech/kolla/keystone-ssh:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2026-01-10 15:02:51.035933 | orchestrator | 6c8802ef6c28 registry.osism.tech/kolla/opensearch-dashboards:2025.1 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2026-01-10 15:02:51.035940 | orchestrator | 9dff8abfc346 registry.osism.tech/kolla/mariadb-server:2025.1 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2026-01-10 15:02:51.035947 | orchestrator | 62f0a0a0c641 registry.osism.tech/kolla/opensearch:2025.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2026-01-10 15:02:51.035952 | orchestrator | 4443fc729cb2 registry.osism.tech/kolla/keepalived:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2026-01-10 15:02:51.035958 | orchestrator | af74074caefe registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-2
2026-01-10 15:02:51.035976 | orchestrator | fe7e2b00132c registry.osism.tech/kolla/proxysql:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2026-01-10 15:02:51.035983 | orchestrator | 8341b1fa6a1d registry.osism.tech/kolla/haproxy:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2026-01-10 15:02:51.035989 | orchestrator | fc58334d884b registry.osism.tech/kolla/ovn-northd:2025.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd
2026-01-10 15:02:51.035996 | orchestrator | 9b1d2804be06 registry.osism.tech/kolla/rabbitmq:2025.1 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) rabbitmq
2026-01-10 15:02:51.036028 | orchestrator | cce954346a8a registry.osism.tech/kolla/ovn-sb-db-relay:2025.1 "dumb-init --single-…" 27 minutes ago Up 25 minutes ovn_sb_db_relay_1
2026-01-10 15:02:51.036035 | orchestrator | 0526c0607179 registry.osism.tech/kolla/ovn-sb-db-server:2025.1 "dumb-init --single-…" 27 minutes ago Up 25 minutes ovn_sb_db
2026-01-10 15:02:51.036042 | orchestrator | ce919cc76f48 registry.osism.tech/kolla/ovn-nb-db-server:2025.1 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db
2026-01-10 15:02:51.036049 | orchestrator | a75def075efd registry.osism.tech/kolla/ovn-controller:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller
2026-01-10 15:02:51.036055 | orchestrator | e5cac7951682 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2
2026-01-10 15:02:51.036062 | orchestrator | f2a3f97e3a5c registry.osism.tech/kolla/openvswitch-vswitchd:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd
2026-01-10 15:02:51.036068 | orchestrator | 991116f8d97c registry.osism.tech/kolla/openvswitch-db-server:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2026-01-10 15:02:51.036074 | orchestrator | 2733e2b96de4 registry.osism.tech/kolla/redis-sentinel:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2026-01-10 15:02:51.036081 | orchestrator | 2f824051be09 registry.osism.tech/kolla/redis:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis
2026-01-10 15:02:51.036087 | orchestrator | d180fa13e63a registry.osism.tech/kolla/memcached:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached
2026-01-10 15:02:51.036094 | orchestrator | 0f366fbaa8f7 registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2026-01-10 15:02:51.036100 | orchestrator | e5134b6416a2 registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2026-01-10 15:02:51.036112 | orchestrator | d39be97927de registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2026-01-10 15:02:51.392952 | orchestrator |
2026-01-10 15:02:51.393036 | orchestrator | ## Images @ testbed-node-2
2026-01-10 15:02:51.393047 | orchestrator |
2026-01-10 15:02:51.393054 | orchestrator | + echo
2026-01-10 15:02:51.393062 | orchestrator | + echo '## Images @ testbed-node-2'
2026-01-10 15:02:51.393069 | orchestrator | + echo
2026-01-10 15:02:51.393076 | orchestrator | + osism container testbed-node-2 images
2026-01-10 15:02:53.934076 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-01-10 15:02:53.934166 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 2ffb60ff6501 12 hours ago 1.27GB
2026-01-10 15:02:53.934173 | orchestrator | registry.osism.tech/kolla/memcached 2025.1 147fb51206a9 13 hours ago 272MB
2026-01-10 15:02:53.934178 | orchestrator | registry.osism.tech/kolla/grafana 2025.1 d60d501bfb2e 13 hours ago 1.02GB
2026-01-10 15:02:53.934183 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2025.1 28bfab3a635d 13 hours ago 1.53GB
2026-01-10 15:02:53.934187 | orchestrator | registry.osism.tech/kolla/opensearch 2025.1 180fefcd6471 13 hours ago 1.56GB
2026-01-10 15:02:53.934192 | orchestrator | registry.osism.tech/kolla/cron 2025.1 9d120b7105f5 13 hours ago 271MB
2026-01-10 15:02:53.934196 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 746f114d0355 13 hours ago 585MB
2026-01-10 15:02:53.934216 | orchestrator | registry.osism.tech/kolla/proxysql 2025.1 904212638264 13 hours ago 418MB
2026-01-10 15:02:53.934221 | orchestrator | registry.osism.tech/kolla/keepalived 2025.1 9b207f58f544 13 hours ago 282MB
2026-01-10 15:02:53.934225 | orchestrator | registry.osism.tech/kolla/haproxy 2025.1 1a948fcc1079 13 hours ago 280MB
2026-01-10 15:02:53.934230 | orchestrator | registry.osism.tech/kolla/rabbitmq 2025.1 936747a1a04d 13 hours ago 345MB
2026-01-10 15:02:53.934234 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 a91be85ab8f4 13 hours ago 679MB
2026-01-10 15:02:53.934238 | orchestrator | registry.osism.tech/kolla/redis 2025.1 b06a63fd8b65 13 hours ago 278MB
2026-01-10 15:02:53.934242 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2025.1 81c9552f7b8a 13 hours ago 278MB
2026-01-10 15:02:53.934247 | orchestrator | registry.osism.tech/kolla/mariadb-server 2025.1 3edbca01402d 13 hours ago 458MB
2026-01-10 15:02:53.934251 | orchestrator | registry.osism.tech/kolla/horizon 2025.1 c36340c4fdce 13 hours ago 1.2GB
2026-01-10 15:02:53.934255 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2025.1 dc7de6faec04 13 hours ago 288MB
2026-01-10 15:02:53.934259 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2025.1 e7b1a1f98880 13 hours ago 288MB
2026-01-10 15:02:53.934264 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2025.1 9e66eaaf6171 13 hours ago 307MB
2026-01-10 15:02:53.934268 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2025.1 e59629a5a944 13 hours ago 297MB
2026-01-10 15:02:53.934272 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2025.1 a9cffb3ea040 13 hours ago 304MB
2026-01-10 15:02:53.934276 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 dbb749517df4 13 hours ago 311MB
2026-01-10 15:02:53.934281 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 d5c15f74b87b 13 hours ago 363MB
2026-01-10 15:02:53.934285 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2025.1 0505f0262488 13 hours ago 1.07GB
2026-01-10 15:02:53.934290 | orchestrator | registry.osism.tech/kolla/octavia-worker 2025.1 77c32aec0ebe 13 hours ago 1.05GB
2026-01-10 15:02:53.934294 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2025.1 50baf56511e9 13 hours ago 1.05GB
2026-01-10 15:02:53.934298 | orchestrator | registry.osism.tech/kolla/octavia-api 2025.1 a1f761f3e3e0 13 hours ago 1.07GB
2026-01-10 15:02:53.934345 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2025.1 05ca4b8f3771 13 hours ago 1.05GB
2026-01-10 15:02:53.934351 | orchestrator | registry.osism.tech/kolla/nova-api 2025.1 3f4e8b4813be 13 hours ago 1.23GB
2026-01-10 15:02:53.934355 | orchestrator | registry.osism.tech/kolla/nova-conductor 2025.1 b419da07022f 13 hours ago 1.23GB
2026-01-10 15:02:53.934359 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2025.1 364df2ecd9fa 13 hours ago 1.39GB
2026-01-10 15:02:53.934363 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2025.1 a99e2f6f88b8 13 hours ago 1.23GB
2026-01-10 15:02:53.934368 | orchestrator | registry.osism.tech/kolla/glance-api 2025.1 796a218454e3 13 hours ago 1.12GB
2026-01-10 15:02:53.934372 | orchestrator | registry.osism.tech/kolla/cinder-volume 2025.1 a263da2e21fa 13 hours ago 1.79GB
2026-01-10 15:02:53.934376 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2025.1 8ed09a39d47d 13 hours ago 1.43GB
2026-01-10 15:02:53.934380 | orchestrator | registry.osism.tech/kolla/cinder-api 2025.1 2b160816b1ac 13 hours ago 1.43GB
2026-01-10 15:02:53.934397 | orchestrator | registry.osism.tech/kolla/cinder-backup 2025.1 4c8c5af90125 13 hours ago 1.44GB
2026-01-10 15:02:53.934407 | orchestrator | registry.osism.tech/kolla/placement-api 2025.1 9c23ac2599f5 13 hours ago 992MB
2026-01-10 15:02:53.934411 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2025.1 a78c8fbbe2ec 13 hours ago 1.26GB
2026-01-10 15:02:53.934416 | orchestrator | registry.osism.tech/kolla/magnum-api 2025.1 3ca716d3f9e6 13 hours ago 1.15GB
2026-01-10 15:02:53.934420 | orchestrator | registry.osism.tech/kolla/neutron-server 2025.1 ecf3aec5ac7c 13 hours ago 1.24GB
2026-01-10 15:02:53.934424 | orchestrator | registry.osism.tech/kolla/designate-central 2025.1 628f4e2f17cc 13 hours ago 1GB
2026-01-10 15:02:53.934428 | orchestrator | registry.osism.tech/kolla/designate-mdns 2025.1 0570068f444f 13 hours ago 1GB
2026-01-10 15:02:53.934432 | orchestrator | registry.osism.tech/kolla/designate-worker 2025.1 45de8c76fa6d 13 hours ago 1.01GB
2026-01-10 15:02:53.934437 | orchestrator | registry.osism.tech/kolla/designate-producer 2025.1 83f657439974 13 hours ago 1GB
2026-01-10 15:02:53.934441 | orchestrator | registry.osism.tech/kolla/designate-api 2025.1 aafa8adffc1e 13 hours ago 1GB
2026-01-10 15:02:53.934445 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2025.1 0872eb9ebdd7 13 hours ago 1.01GB
2026-01-10 15:02:53.934449 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2025.1 940b0f3b627a 13 hours ago 1GB
2026-01-10 15:02:53.934453 | orchestrator | registry.osism.tech/kolla/barbican-api 2025.1 31db20473c45 13 hours ago 1e+03MB
2026-01-10 15:02:53.934458 | orchestrator | registry.osism.tech/kolla/barbican-worker 2025.1 4293da1b6a24 13 hours ago 1GB
2026-01-10 15:02:53.934518 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2025.1 2c7477cf5058 13 hours ago 1.05GB
2026-01-10 15:02:53.934523 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2025.1 0873caa4afe3 13 hours ago 1.05GB
2026-01-10 15:02:53.934531 | orchestrator | registry.osism.tech/kolla/keystone 2025.1 b047a8b081f1 13 hours ago 1.1GB
2026-01-10 15:02:53.934535 | orchestrator | registry.osism.tech/kolla/ovn-controller 2025.1 ace03ecb6fa9 13 hours ago 296MB
2026-01-10 15:02:53.934540 | orchestrator | registry.osism.tech/kolla/ovn-northd 2025.1 cadfe8336bd1 13 hours ago 295MB
2026-01-10 15:02:53.934544 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2025.1 38ecd2b02d75 13 hours ago 295MB
2026-01-10 15:02:53.934548 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2025.1 3293088ce921 13 hours ago 295MB
2026-01-10 15:02:53.934552 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-relay 2025.1 b7c7d5db18f9 13 hours ago 295MB
2026-01-10 15:02:54.291699 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-01-10 15:02:54.300360 | orchestrator | + set -e
2026-01-10 15:02:54.300438 | orchestrator | + source /opt/manager-vars.sh
2026-01-10 15:02:54.301752 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-10 15:02:54.301794 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-10 15:02:54.301801 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-10 15:02:54.301807 | orchestrator | ++ CEPH_VERSION=reef
2026-01-10 15:02:54.301813 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-10 15:02:54.301821 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-10 15:02:54.301827 | orchestrator | ++ export MANAGER_VERSION=latest
2026-01-10 15:02:54.301833 | orchestrator | ++ MANAGER_VERSION=latest
2026-01-10 15:02:54.301839 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-01-10 15:02:54.301846 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-01-10 15:02:54.301851 | orchestrator | ++ export ARA=false
2026-01-10 15:02:54.301857 | orchestrator | ++ ARA=false
2026-01-10 15:02:54.301863 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-10 15:02:54.301868 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-10 15:02:54.301874 | orchestrator | ++ export TEMPEST=false
2026-01-10 15:02:54.301879 | orchestrator | ++ TEMPEST=false
2026-01-10 15:02:54.301886 | orchestrator | ++ export IS_ZUUL=true
2026-01-10 15:02:54.301891 | orchestrator | ++ IS_ZUUL=true
2026-01-10 15:02:54.301897 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.86
2026-01-10 15:02:54.302111 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.86
2026-01-10 15:02:54.302130 | orchestrator | ++ export EXTERNAL_API=false
2026-01-10 15:02:54.302138 | orchestrator | ++ EXTERNAL_API=false
2026-01-10 15:02:54.302146 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-10 15:02:54.302153 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-10 15:02:54.302161 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-10 15:02:54.302168 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-10 15:02:54.302176 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-10 15:02:54.302184 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-10 15:02:54.302191 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-10 15:02:54.302215 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-01-10 15:02:54.314891 | orchestrator | + set -e
2026-01-10 15:02:54.314970 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-10 15:02:54.314979 | orchestrator | ++ export INTERACTIVE=false
2026-01-10 15:02:54.314989 | orchestrator | ++ INTERACTIVE=false
2026-01-10 15:02:54.314996 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-10 15:02:54.315003 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-10 15:02:54.315010 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-01-10 15:02:54.316658 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-01-10 15:02:54.324069 | orchestrator |
2026-01-10 15:02:54.324151 | orchestrator | # Ceph status
2026-01-10 15:02:54.324159 | orchestrator |
2026-01-10 15:02:54.324167 | orchestrator | ++ export MANAGER_VERSION=latest
2026-01-10 15:02:54.324175 | orchestrator | ++ MANAGER_VERSION=latest
2026-01-10 15:02:54.324183 | orchestrator | + echo
2026-01-10 15:02:54.324189 | orchestrator | + echo '# Ceph status'
2026-01-10 15:02:54.324196 | orchestrator | + echo
2026-01-10 15:02:54.324203 | orchestrator | + ceph -s
2026-01-10 15:02:54.897095 | orchestrator | cluster:
2026-01-10 15:02:54.897183 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-01-10 15:02:54.897194 | orchestrator | health: HEALTH_OK
2026-01-10 15:02:54.897201 | orchestrator |
2026-01-10 15:02:54.897207 | orchestrator | services:
2026-01-10 15:02:54.897214 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m)
2026-01-10 15:02:54.897221 | orchestrator | mgr: testbed-node-2(active, since 15m), standbys: testbed-node-0, testbed-node-1
2026-01-10 15:02:54.897228 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-01-10 15:02:54.897235 | orchestrator | osd: 6 osds: 6 up (since 24m), 6 in (since 25m)
2026-01-10 15:02:54.897240 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-01-10 15:02:54.897246 | orchestrator |
2026-01-10 15:02:54.897253 | orchestrator | data:
2026-01-10 15:02:54.897259 | orchestrator | volumes: 1/1 healthy
2026-01-10 15:02:54.897264 | orchestrator | pools: 14 pools, 401 pgs
2026-01-10 15:02:54.897271 | orchestrator | objects: 523 objects, 2.2 GiB
2026-01-10 15:02:54.897277 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2026-01-10 15:02:54.897284 | orchestrator | pgs: 401 active+clean
2026-01-10 15:02:54.897290 | orchestrator |
2026-01-10 15:02:54.942781 | orchestrator |
2026-01-10 15:02:54.942851 | orchestrator | # Ceph versions
2026-01-10 15:02:54.942857 | orchestrator |
2026-01-10 15:02:54.942861 | orchestrator | + echo
2026-01-10 15:02:54.942866 | orchestrator | + echo '# Ceph versions'
2026-01-10 15:02:54.942871 | orchestrator | + echo
2026-01-10 15:02:54.942875 | orchestrator | + ceph versions
2026-01-10 15:02:55.544018 | orchestrator | {
2026-01-10 15:02:55.544089 | orchestrator | "mon": {
2026-01-10 15:02:55.544096 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-10 15:02:55.544102 | orchestrator | },
2026-01-10 15:02:55.544106 | orchestrator | "mgr": {
2026-01-10 15:02:55.544110 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-10 15:02:55.544114 | orchestrator | },
2026-01-10 15:02:55.544118 | orchestrator | "osd": {
2026-01-10 15:02:55.544122 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-01-10 15:02:55.544138 | orchestrator | },
2026-01-10 15:02:55.544142 | orchestrator | "mds": {
2026-01-10 15:02:55.544152 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-10 15:02:55.544156 | orchestrator | },
2026-01-10 15:02:55.544159 | orchestrator | "rgw": {
2026-01-10 15:02:55.544163 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-10 15:02:55.544167 | orchestrator | },
2026-01-10 15:02:55.544171 | orchestrator | "overall": {
2026-01-10 15:02:55.544201 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-01-10 15:02:55.544205 | orchestrator | }
2026-01-10 15:02:55.544209 | orchestrator | }
2026-01-10 15:02:55.594417 | orchestrator |
2026-01-10 15:02:55.594561 | orchestrator | # Ceph OSD tree
2026-01-10 15:02:55.594576 | orchestrator |
2026-01-10 15:02:55.594581 | orchestrator | + echo
2026-01-10 15:02:55.594586 | orchestrator | + echo '# Ceph OSD tree'
2026-01-10 15:02:55.594591 | orchestrator | + echo
2026-01-10 15:02:55.594595 | orchestrator | + ceph osd df tree
2026-01-10 15:02:56.148665 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS
STATUS TYPE NAME 2026-01-10 15:02:56.148775 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2026-01-10 15:02:56.148788 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2026-01-10 15:02:56.148796 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.36 0.91 189 up osd.0 2026-01-10 15:02:56.148803 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.47 1.09 201 up osd.3 2026-01-10 15:02:56.148809 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2026-01-10 15:02:56.148816 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 7.06 1.19 206 up osd.2 2026-01-10 15:02:56.148822 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 976 MiB 907 MiB 1 KiB 70 MiB 19 GiB 4.77 0.81 186 up osd.5 2026-01-10 15:02:56.148829 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2026-01-10 15:02:56.148835 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.28 1.06 192 up osd.1 2026-01-10 15:02:56.148841 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.56 0.94 196 up osd.4 2026-01-10 15:02:56.148847 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2026-01-10 15:02:56.148853 | orchestrator | MIN/MAX VAR: 0.81/1.19 STDDEV: 0.76 2026-01-10 15:02:56.200083 | orchestrator | 2026-01-10 15:02:56.200170 | orchestrator | # Ceph monitor status 2026-01-10 15:02:56.200181 | orchestrator | 2026-01-10 15:02:56.200188 | orchestrator | + echo 2026-01-10 15:02:56.200195 | orchestrator | + echo '# Ceph monitor status' 2026-01-10 15:02:56.200202 | orchestrator | + echo 2026-01-10 15:02:56.200208 | orchestrator | + ceph mon stat 2026-01-10 15:02:56.796299 | orchestrator | e1: 3 mons at 
{testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 14, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-01-10 15:02:56.840461 | orchestrator | 2026-01-10 15:02:56.840664 | orchestrator | # Ceph quorum status 2026-01-10 15:02:56.840679 | orchestrator | 2026-01-10 15:02:56.840686 | orchestrator | + echo 2026-01-10 15:02:56.840694 | orchestrator | + echo '# Ceph quorum status' 2026-01-10 15:02:56.840701 | orchestrator | + echo 2026-01-10 15:02:56.840871 | orchestrator | + ceph quorum_status 2026-01-10 15:02:56.841708 | orchestrator | + jq 2026-01-10 15:02:57.495686 | orchestrator | { 2026-01-10 15:02:57.495783 | orchestrator | "election_epoch": 14, 2026-01-10 15:02:57.495804 | orchestrator | "quorum": [ 2026-01-10 15:02:57.495810 | orchestrator | 0, 2026-01-10 15:02:57.495816 | orchestrator | 1, 2026-01-10 15:02:57.495822 | orchestrator | 2 2026-01-10 15:02:57.495828 | orchestrator | ], 2026-01-10 15:02:57.495834 | orchestrator | "quorum_names": [ 2026-01-10 15:02:57.495840 | orchestrator | "testbed-node-0", 2026-01-10 15:02:57.495847 | orchestrator | "testbed-node-1", 2026-01-10 15:02:57.495996 | orchestrator | "testbed-node-2" 2026-01-10 15:02:57.496006 | orchestrator | ], 2026-01-10 15:02:57.496012 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-01-10 15:02:57.496020 | orchestrator | "quorum_age": 1712, 2026-01-10 15:02:57.496054 | orchestrator | "features": { 2026-01-10 15:02:57.496060 | orchestrator | "quorum_con": "4540138322906710015", 2026-01-10 15:02:57.496066 | orchestrator | "quorum_mon": [ 2026-01-10 15:02:57.496072 | orchestrator | "kraken", 2026-01-10 15:02:57.496077 | orchestrator | "luminous", 2026-01-10 15:02:57.496083 | orchestrator | "mimic", 2026-01-10 15:02:57.496089 | orchestrator | 
"osdmap-prune", 2026-01-10 15:02:57.496095 | orchestrator | "nautilus", 2026-01-10 15:02:57.496100 | orchestrator | "octopus", 2026-01-10 15:02:57.496106 | orchestrator | "pacific", 2026-01-10 15:02:57.496112 | orchestrator | "elector-pinging", 2026-01-10 15:02:57.496119 | orchestrator | "quincy", 2026-01-10 15:02:57.496124 | orchestrator | "reef" 2026-01-10 15:02:57.496130 | orchestrator | ] 2026-01-10 15:02:57.496136 | orchestrator | }, 2026-01-10 15:02:57.496142 | orchestrator | "monmap": { 2026-01-10 15:02:57.496148 | orchestrator | "epoch": 1, 2026-01-10 15:02:57.496155 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-01-10 15:02:57.496162 | orchestrator | "modified": "2026-01-10T14:34:01.959916Z", 2026-01-10 15:02:57.496169 | orchestrator | "created": "2026-01-10T14:34:01.959916Z", 2026-01-10 15:02:57.496175 | orchestrator | "min_mon_release": 18, 2026-01-10 15:02:57.496181 | orchestrator | "min_mon_release_name": "reef", 2026-01-10 15:02:57.496185 | orchestrator | "election_strategy": 1, 2026-01-10 15:02:57.496189 | orchestrator | "disallowed_leaders: ": "", 2026-01-10 15:02:57.496193 | orchestrator | "stretch_mode": false, 2026-01-10 15:02:57.496197 | orchestrator | "tiebreaker_mon": "", 2026-01-10 15:02:57.496201 | orchestrator | "removed_ranks: ": "", 2026-01-10 15:02:57.496204 | orchestrator | "features": { 2026-01-10 15:02:57.496208 | orchestrator | "persistent": [ 2026-01-10 15:02:57.496212 | orchestrator | "kraken", 2026-01-10 15:02:57.496216 | orchestrator | "luminous", 2026-01-10 15:02:57.496220 | orchestrator | "mimic", 2026-01-10 15:02:57.496223 | orchestrator | "osdmap-prune", 2026-01-10 15:02:57.496227 | orchestrator | "nautilus", 2026-01-10 15:02:57.496231 | orchestrator | "octopus", 2026-01-10 15:02:57.496234 | orchestrator | "pacific", 2026-01-10 15:02:57.496238 | orchestrator | "elector-pinging", 2026-01-10 15:02:57.496242 | orchestrator | "quincy", 2026-01-10 15:02:57.496245 | orchestrator | "reef" 2026-01-10 
15:02:57.496249 | orchestrator | ], 2026-01-10 15:02:57.496253 | orchestrator | "optional": [] 2026-01-10 15:02:57.496257 | orchestrator | }, 2026-01-10 15:02:57.496260 | orchestrator | "mons": [ 2026-01-10 15:02:57.496264 | orchestrator | { 2026-01-10 15:02:57.496268 | orchestrator | "rank": 0, 2026-01-10 15:02:57.496272 | orchestrator | "name": "testbed-node-0", 2026-01-10 15:02:57.496275 | orchestrator | "public_addrs": { 2026-01-10 15:02:57.496279 | orchestrator | "addrvec": [ 2026-01-10 15:02:57.496283 | orchestrator | { 2026-01-10 15:02:57.496286 | orchestrator | "type": "v2", 2026-01-10 15:02:57.496290 | orchestrator | "addr": "192.168.16.10:3300", 2026-01-10 15:02:57.496294 | orchestrator | "nonce": 0 2026-01-10 15:02:57.496297 | orchestrator | }, 2026-01-10 15:02:57.496301 | orchestrator | { 2026-01-10 15:02:57.496305 | orchestrator | "type": "v1", 2026-01-10 15:02:57.496309 | orchestrator | "addr": "192.168.16.10:6789", 2026-01-10 15:02:57.496312 | orchestrator | "nonce": 0 2026-01-10 15:02:57.496316 | orchestrator | } 2026-01-10 15:02:57.496320 | orchestrator | ] 2026-01-10 15:02:57.496326 | orchestrator | }, 2026-01-10 15:02:57.496332 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-01-10 15:02:57.496340 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-01-10 15:02:57.496348 | orchestrator | "priority": 0, 2026-01-10 15:02:57.496357 | orchestrator | "weight": 0, 2026-01-10 15:02:57.496362 | orchestrator | "crush_location": "{}" 2026-01-10 15:02:57.496368 | orchestrator | }, 2026-01-10 15:02:57.496373 | orchestrator | { 2026-01-10 15:02:57.496380 | orchestrator | "rank": 1, 2026-01-10 15:02:57.496385 | orchestrator | "name": "testbed-node-1", 2026-01-10 15:02:57.496391 | orchestrator | "public_addrs": { 2026-01-10 15:02:57.496396 | orchestrator | "addrvec": [ 2026-01-10 15:02:57.496401 | orchestrator | { 2026-01-10 15:02:57.496407 | orchestrator | "type": "v2", 2026-01-10 15:02:57.496412 | orchestrator | "addr": "192.168.16.11:3300", 
2026-01-10 15:02:57.496417 | orchestrator | "nonce": 0 2026-01-10 15:02:57.496423 | orchestrator | }, 2026-01-10 15:02:57.496429 | orchestrator | { 2026-01-10 15:02:57.496434 | orchestrator | "type": "v1", 2026-01-10 15:02:57.496441 | orchestrator | "addr": "192.168.16.11:6789", 2026-01-10 15:02:57.496458 | orchestrator | "nonce": 0 2026-01-10 15:02:57.496463 | orchestrator | } 2026-01-10 15:02:57.496469 | orchestrator | ] 2026-01-10 15:02:57.496633 | orchestrator | }, 2026-01-10 15:02:57.496653 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-01-10 15:02:57.496658 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-01-10 15:02:57.496663 | orchestrator | "priority": 0, 2026-01-10 15:02:57.496668 | orchestrator | "weight": 0, 2026-01-10 15:02:57.496672 | orchestrator | "crush_location": "{}" 2026-01-10 15:02:57.496676 | orchestrator | }, 2026-01-10 15:02:57.496680 | orchestrator | { 2026-01-10 15:02:57.496685 | orchestrator | "rank": 2, 2026-01-10 15:02:57.496689 | orchestrator | "name": "testbed-node-2", 2026-01-10 15:02:57.496694 | orchestrator | "public_addrs": { 2026-01-10 15:02:57.496698 | orchestrator | "addrvec": [ 2026-01-10 15:02:57.496703 | orchestrator | { 2026-01-10 15:02:57.496707 | orchestrator | "type": "v2", 2026-01-10 15:02:57.496711 | orchestrator | "addr": "192.168.16.12:3300", 2026-01-10 15:02:57.496716 | orchestrator | "nonce": 0 2026-01-10 15:02:57.496720 | orchestrator | }, 2026-01-10 15:02:57.496725 | orchestrator | { 2026-01-10 15:02:57.496729 | orchestrator | "type": "v1", 2026-01-10 15:02:57.496734 | orchestrator | "addr": "192.168.16.12:6789", 2026-01-10 15:02:57.496738 | orchestrator | "nonce": 0 2026-01-10 15:02:57.496743 | orchestrator | } 2026-01-10 15:02:57.496747 | orchestrator | ] 2026-01-10 15:02:57.496752 | orchestrator | }, 2026-01-10 15:02:57.496756 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-01-10 15:02:57.496761 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-01-10 15:02:57.496765 | 
orchestrator |         "priority": 0,
2026-01-10 15:02:57.496769 | orchestrator |         "weight": 0,
2026-01-10 15:02:57.496774 | orchestrator |         "crush_location": "{}"
2026-01-10 15:02:57.496778 | orchestrator |       }
2026-01-10 15:02:57.496782 | orchestrator |     ]
2026-01-10 15:02:57.496787 | orchestrator |   }
2026-01-10 15:02:57.496791 | orchestrator | }
2026-01-10 15:02:57.496808 | orchestrator |
2026-01-10 15:02:57.496813 | orchestrator | # Ceph free space status
2026-01-10 15:02:57.496817 | orchestrator |
2026-01-10 15:02:57.496821 | orchestrator | + echo
2026-01-10 15:02:57.496826 | orchestrator | + echo '# Ceph free space status'
2026-01-10 15:02:57.496830 | orchestrator | + echo
2026-01-10 15:02:57.496834 | orchestrator | + ceph df
2026-01-10 15:02:58.084950 | orchestrator | --- RAW STORAGE ---
2026-01-10 15:02:58.085054 | orchestrator | CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
2026-01-10 15:02:58.085084 | orchestrator | hdd    120 GiB  113 GiB  7.1 GiB   7.1 GiB       5.92
2026-01-10 15:02:58.085097 | orchestrator | TOTAL  120 GiB  113 GiB  7.1 GiB   7.1 GiB       5.92
2026-01-10 15:02:58.085110 | orchestrator |
2026-01-10 15:02:58.085123 | orchestrator | --- POOLS ---
2026-01-10 15:02:58.085136 | orchestrator | POOL                       ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
2026-01-10 15:02:58.085150 | orchestrator | .mgr                        1    1  577 KiB        2  1.1 MiB      0     53 GiB
2026-01-10 15:02:58.085162 | orchestrator | cephfs_data                 2   32      0 B        0      0 B      0     35 GiB
2026-01-10 15:02:58.085174 | orchestrator | cephfs_metadata             3   16  4.4 KiB       22   96 KiB      0     35 GiB
2026-01-10 15:02:58.085186 | orchestrator | default.rgw.buckets.data    4   32      0 B        0      0 B      0     35 GiB
2026-01-10 15:02:58.085199 | orchestrator | default.rgw.buckets.index   5   32      0 B        0      0 B      0     35 GiB
2026-01-10 15:02:58.085211 | orchestrator | default.rgw.control         6   32      0 B        8      0 B      0     35 GiB
2026-01-10 15:02:58.085223 | orchestrator | default.rgw.log             7   32  3.6 KiB      177  408 KiB      0     35 GiB
2026-01-10 15:02:58.085236 | orchestrator | default.rgw.meta            8   32      0 B        0      0 B      0     35 GiB
2026-01-10 15:02:58.085248 | orchestrator | .rgw.root                   9   32  3.0 KiB        7   56 KiB      0     53 GiB
2026-01-10 15:02:58.085261 | orchestrator | backups                    10   32     19 B        2   12 KiB      0     35 GiB
2026-01-10 15:02:58.085273 | orchestrator | volumes                    11   32     19 B        2   12 KiB      0     35 GiB
2026-01-10 15:02:58.085285 | orchestrator | images                     12   32  2.2 GiB      299  6.7 GiB   5.94     35 GiB
2026-01-10 15:02:58.085298 | orchestrator | metrics                    13   32     19 B        2   12 KiB      0     35 GiB
2026-01-10 15:02:58.085310 | orchestrator | vms                        14   32     19 B        2   12 KiB      0     35 GiB
2026-01-10 15:02:58.131529 | orchestrator | ++ semver latest 5.0.0
2026-01-10 15:02:58.190235 | orchestrator | + [[ -1 -eq -1 ]]
2026-01-10 15:02:58.191138 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-01-10 15:02:58.191170 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2026-01-10 15:02:58.191179 | orchestrator | + osism apply facts
2026-01-10 15:03:00.332379 | orchestrator | 2026-01-10 15:03:00 | INFO  | Task 60b0ff93-d6c9-4d35-b4ec-f163ca8bb837 (facts) was prepared for execution.
2026-01-10 15:03:00.332470 | orchestrator | 2026-01-10 15:03:00 | INFO  | It takes a moment until task 60b0ff93-d6c9-4d35-b4ec-f163ca8bb837 (facts) has been started and output is visible here.
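The trace above gates on `semver latest 5.0.0` (which returns `-1`) before continuing, treating the pinned `latest` manager version as never older than a concrete release. A minimal sketch of that comparison logic, assuming GNU `sort -V` semantics for concrete versions; the helper name `manager_version_lt` is hypothetical and not part of the testbed scripts:

```shell
# Hypothetical helper mirroring the version gate seen in the trace.
# Assumption: "latest" always compares as newest; concrete versions
# fall back to coreutils version sort (sort -V).
manager_version_lt() {
  a=$1 b=$2
  [ "$a" = latest ] && return 1   # "latest" is never older than anything
  [ "$b" = latest ] && return 0   # any pinned release is older than "latest"
  [ "$a" = "$b" ] && return 1     # equal versions are not less-than
  [ "$(printf '%s\n%s\n' "$a" "$b" | sort -V | head -n1)" = "$a" ]
}
```

Under this sketch, `manager_version_lt latest 5.0.0` fails (matching the `-1` short-circuit in the log), while `manager_version_lt 4.9.0 5.0.0` succeeds.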
2026-01-10 15:03:14.357042 | orchestrator |
2026-01-10 15:03:14.357132 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-10 15:03:14.357139 | orchestrator |
2026-01-10 15:03:14.357144 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-10 15:03:14.357149 | orchestrator | Saturday 10 January 2026  15:03:04 +0000 (0:00:00.277)       0:00:00.277 ******
2026-01-10 15:03:14.357153 | orchestrator | ok: [testbed-manager]
2026-01-10 15:03:14.357161 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:03:14.357167 | orchestrator | ok: [testbed-node-1]
2026-01-10 15:03:14.357174 | orchestrator | ok: [testbed-node-2]
2026-01-10 15:03:14.357180 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:03:14.357187 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:03:14.357193 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:03:14.357199 | orchestrator |
2026-01-10 15:03:14.357205 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-10 15:03:14.357212 | orchestrator | Saturday 10 January 2026  15:03:06 +0000 (0:00:01.706)       0:00:01.984 ******
2026-01-10 15:03:14.357218 | orchestrator | skipping: [testbed-manager]
2026-01-10 15:03:14.357225 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:03:14.357231 | orchestrator | skipping: [testbed-node-1]
2026-01-10 15:03:14.357236 | orchestrator | skipping: [testbed-node-2]
2026-01-10 15:03:14.357242 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:03:14.357248 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:03:14.357254 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:03:14.357260 | orchestrator |
2026-01-10 15:03:14.357266 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-10 15:03:14.357272 | orchestrator |
2026-01-10 15:03:14.357278 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-10 15:03:14.357284 | orchestrator | Saturday 10 January 2026  15:03:08 +0000 (0:00:01.424)       0:00:03.408 ******
2026-01-10 15:03:14.357290 | orchestrator | ok: [testbed-node-2]
2026-01-10 15:03:14.357296 | orchestrator | ok: [testbed-node-1]
2026-01-10 15:03:14.357303 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:03:14.357308 | orchestrator | ok: [testbed-manager]
2026-01-10 15:03:14.357315 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:03:14.357320 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:03:14.357326 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:03:14.357332 | orchestrator |
2026-01-10 15:03:14.357337 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-10 15:03:14.357343 | orchestrator |
2026-01-10 15:03:14.357350 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-10 15:03:14.357357 | orchestrator | Saturday 10 January 2026  15:03:13 +0000 (0:00:05.189)       0:00:08.598 ******
2026-01-10 15:03:14.357362 | orchestrator | skipping: [testbed-manager]
2026-01-10 15:03:14.357368 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:03:14.357375 | orchestrator | skipping: [testbed-node-1]
2026-01-10 15:03:14.357390 | orchestrator | skipping: [testbed-node-2]
2026-01-10 15:03:14.357397 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:03:14.357404 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:03:14.357410 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:03:14.357415 | orchestrator |
2026-01-10 15:03:14.357421 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 15:03:14.357428 | orchestrator | testbed-manager  : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-01-10 15:03:14.357460 | orchestrator | testbed-node-0   : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-01-10 15:03:14.357467 | orchestrator | testbed-node-1   : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-01-10 15:03:14.357474 | orchestrator | testbed-node-2   : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-01-10 15:03:14.357481 | orchestrator | testbed-node-3   : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-01-10 15:03:14.357487 | orchestrator | testbed-node-4   : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-01-10 15:03:14.357493 | orchestrator | testbed-node-5   : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-01-10 15:03:14.357500 | orchestrator |
2026-01-10 15:03:14.357505 | orchestrator |
2026-01-10 15:03:14.357612 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 15:03:14.357618 | orchestrator | Saturday 10 January 2026  15:03:13 +0000 (0:00:00.590)       0:00:09.188 ******
2026-01-10 15:03:14.357622 | orchestrator | ===============================================================================
2026-01-10 15:03:14.357627 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.19s
2026-01-10 15:03:14.357645 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.71s
2026-01-10 15:03:14.357651 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.42s
2026-01-10 15:03:14.357657 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s
2026-01-10 15:03:14.705029 | orchestrator | + osism validate ceph-mons
2026-01-10 15:03:48.084424 | orchestrator |
2026-01-10 15:03:48.084556 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-01-10 15:03:48.084564 | orchestrator |
2026-01-10 15:03:48.084569 | orchestrator | TASK [Get timestamp for report file]
******************************************* 2026-01-10 15:03:48.084621 | orchestrator | Saturday 10 January 2026 15:03:31 +0000 (0:00:00.455) 0:00:00.455 ****** 2026-01-10 15:03:48.084626 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-10 15:03:48.084630 | orchestrator | 2026-01-10 15:03:48.084634 | orchestrator | TASK [Create report output directory] ****************************************** 2026-01-10 15:03:48.084639 | orchestrator | Saturday 10 January 2026 15:03:32 +0000 (0:00:00.939) 0:00:01.395 ****** 2026-01-10 15:03:48.084643 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-10 15:03:48.084647 | orchestrator | 2026-01-10 15:03:48.084651 | orchestrator | TASK [Define report vars] ****************************************************** 2026-01-10 15:03:48.084655 | orchestrator | Saturday 10 January 2026 15:03:33 +0000 (0:00:01.075) 0:00:02.470 ****** 2026-01-10 15:03:48.084659 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:03:48.084664 | orchestrator | 2026-01-10 15:03:48.084668 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-01-10 15:03:48.084671 | orchestrator | Saturday 10 January 2026 15:03:33 +0000 (0:00:00.116) 0:00:02.586 ****** 2026-01-10 15:03:48.084675 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:03:48.084679 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:03:48.084683 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:03:48.084686 | orchestrator | 2026-01-10 15:03:48.084690 | orchestrator | TASK [Get container info] ****************************************************** 2026-01-10 15:03:48.084694 | orchestrator | Saturday 10 January 2026 15:03:34 +0000 (0:00:00.304) 0:00:02.891 ****** 2026-01-10 15:03:48.084698 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:03:48.084702 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:03:48.084706 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:03:48.084725 | 
orchestrator | 2026-01-10 15:03:48.084729 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-01-10 15:03:48.084733 | orchestrator | Saturday 10 January 2026 15:03:35 +0000 (0:00:01.182) 0:00:04.074 ****** 2026-01-10 15:03:48.084737 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:03:48.084741 | orchestrator | skipping: [testbed-node-1] 2026-01-10 15:03:48.084745 | orchestrator | skipping: [testbed-node-2] 2026-01-10 15:03:48.084749 | orchestrator | 2026-01-10 15:03:48.084752 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-01-10 15:03:48.084756 | orchestrator | Saturday 10 January 2026 15:03:35 +0000 (0:00:00.312) 0:00:04.387 ****** 2026-01-10 15:03:48.084760 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:03:48.084764 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:03:48.084767 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:03:48.084771 | orchestrator | 2026-01-10 15:03:48.084775 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-10 15:03:48.084779 | orchestrator | Saturday 10 January 2026 15:03:36 +0000 (0:00:00.568) 0:00:04.955 ****** 2026-01-10 15:03:48.084782 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:03:48.084786 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:03:48.084790 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:03:48.084793 | orchestrator | 2026-01-10 15:03:48.084797 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-01-10 15:03:48.084801 | orchestrator | Saturday 10 January 2026 15:03:36 +0000 (0:00:00.320) 0:00:05.276 ****** 2026-01-10 15:03:48.084804 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:03:48.084808 | orchestrator | skipping: [testbed-node-1] 2026-01-10 15:03:48.084812 | orchestrator | skipping: [testbed-node-2] 2026-01-10 15:03:48.084816 | orchestrator | 2026-01-10 
15:03:48.084820 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-01-10 15:03:48.084824 | orchestrator | Saturday 10 January 2026 15:03:36 +0000 (0:00:00.298) 0:00:05.574 ****** 2026-01-10 15:03:48.084827 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:03:48.084831 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:03:48.084835 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:03:48.084839 | orchestrator | 2026-01-10 15:03:48.084843 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-10 15:03:48.084846 | orchestrator | Saturday 10 January 2026 15:03:37 +0000 (0:00:00.541) 0:00:06.116 ****** 2026-01-10 15:03:48.084850 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:03:48.084854 | orchestrator | 2026-01-10 15:03:48.084858 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-10 15:03:48.084861 | orchestrator | Saturday 10 January 2026 15:03:37 +0000 (0:00:00.279) 0:00:06.395 ****** 2026-01-10 15:03:48.084865 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:03:48.084869 | orchestrator | 2026-01-10 15:03:48.084873 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-10 15:03:48.084876 | orchestrator | Saturday 10 January 2026 15:03:37 +0000 (0:00:00.258) 0:00:06.654 ****** 2026-01-10 15:03:48.084880 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:03:48.084884 | orchestrator | 2026-01-10 15:03:48.084887 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:03:48.084891 | orchestrator | Saturday 10 January 2026 15:03:38 +0000 (0:00:00.263) 0:00:06.918 ****** 2026-01-10 15:03:48.084895 | orchestrator | 2026-01-10 15:03:48.084898 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:03:48.084902 | orchestrator | 
Saturday 10 January 2026 15:03:38 +0000 (0:00:00.072) 0:00:06.990 ****** 2026-01-10 15:03:48.084906 | orchestrator | 2026-01-10 15:03:48.084910 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-10 15:03:48.084913 | orchestrator | Saturday 10 January 2026 15:03:38 +0000 (0:00:00.072) 0:00:07.062 ****** 2026-01-10 15:03:48.084917 | orchestrator | 2026-01-10 15:03:48.084921 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-10 15:03:48.084925 | orchestrator | Saturday 10 January 2026 15:03:38 +0000 (0:00:00.079) 0:00:07.142 ****** 2026-01-10 15:03:48.084932 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:03:48.084936 | orchestrator | 2026-01-10 15:03:48.084940 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-01-10 15:03:48.084944 | orchestrator | Saturday 10 January 2026 15:03:38 +0000 (0:00:00.247) 0:00:07.389 ****** 2026-01-10 15:03:48.084947 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:03:48.084951 | orchestrator | 2026-01-10 15:03:48.084967 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-01-10 15:03:48.084972 | orchestrator | Saturday 10 January 2026 15:03:38 +0000 (0:00:00.239) 0:00:07.628 ****** 2026-01-10 15:03:48.084977 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:03:48.084981 | orchestrator | 2026-01-10 15:03:48.084986 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-01-10 15:03:48.084991 | orchestrator | Saturday 10 January 2026 15:03:38 +0000 (0:00:00.126) 0:00:07.754 ****** 2026-01-10 15:03:48.084995 | orchestrator | changed: [testbed-node-0] 2026-01-10 15:03:48.085000 | orchestrator | 2026-01-10 15:03:48.085004 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-01-10 15:03:48.085009 | orchestrator | 
Saturday 10 January 2026 15:03:40 +0000 (0:00:01.720) 0:00:09.475 ******
2026-01-10 15:03:48.085013 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:03:48.085017 | orchestrator |
2026-01-10 15:03:48.085022 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-01-10 15:03:48.085026 | orchestrator | Saturday 10 January 2026 15:03:41 +0000 (0:00:00.535) 0:00:10.010 ******
2026-01-10 15:03:48.085030 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:03:48.085035 | orchestrator |
2026-01-10 15:03:48.085039 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-01-10 15:03:48.085044 | orchestrator | Saturday 10 January 2026 15:03:41 +0000 (0:00:00.129) 0:00:10.140 ******
2026-01-10 15:03:48.085048 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:03:48.085053 | orchestrator |
2026-01-10 15:03:48.085057 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-01-10 15:03:48.085061 | orchestrator | Saturday 10 January 2026 15:03:41 +0000 (0:00:00.331) 0:00:10.471 ******
2026-01-10 15:03:48.085066 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:03:48.085070 | orchestrator |
2026-01-10 15:03:48.085074 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-01-10 15:03:48.085098 | orchestrator | Saturday 10 January 2026 15:03:41 +0000 (0:00:00.326) 0:00:10.798 ******
2026-01-10 15:03:48.085103 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:03:48.085108 | orchestrator |
2026-01-10 15:03:48.085112 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-01-10 15:03:48.085117 | orchestrator | Saturday 10 January 2026 15:03:42 +0000 (0:00:00.147) 0:00:10.946 ******
2026-01-10 15:03:48.085121 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:03:48.085125 | orchestrator |
2026-01-10 15:03:48.085129 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-01-10 15:03:48.085134 | orchestrator | Saturday 10 January 2026 15:03:42 +0000 (0:00:00.122) 0:00:11.068 ******
2026-01-10 15:03:48.085138 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:03:48.085142 | orchestrator |
2026-01-10 15:03:48.085147 | orchestrator | TASK [Gather status data] ******************************************************
2026-01-10 15:03:48.085151 | orchestrator | Saturday 10 January 2026 15:03:42 +0000 (0:00:00.113) 0:00:11.182 ******
2026-01-10 15:03:48.085155 | orchestrator | changed: [testbed-node-0]
2026-01-10 15:03:48.085160 | orchestrator |
2026-01-10 15:03:48.085164 | orchestrator | TASK [Set health test data] ****************************************************
2026-01-10 15:03:48.085168 | orchestrator | Saturday 10 January 2026 15:03:43 +0000 (0:00:01.419) 0:00:12.601 ******
2026-01-10 15:03:48.085173 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:03:48.085177 | orchestrator |
2026-01-10 15:03:48.085182 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-01-10 15:03:48.085186 | orchestrator | Saturday 10 January 2026 15:03:44 +0000 (0:00:00.330) 0:00:12.931 ******
2026-01-10 15:03:48.085194 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:03:48.085199 | orchestrator |
2026-01-10 15:03:48.085203 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-01-10 15:03:48.085207 | orchestrator | Saturday 10 January 2026 15:03:44 +0000 (0:00:00.153) 0:00:13.085 ******
2026-01-10 15:03:48.085212 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:03:48.085216 | orchestrator |
2026-01-10 15:03:48.085220 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-01-10 15:03:48.085225 | orchestrator | Saturday 10 January 2026 15:03:44 +0000 (0:00:00.152) 0:00:13.238 ******
2026-01-10 15:03:48.085229 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:03:48.085234 | orchestrator |
2026-01-10 15:03:48.085247 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-01-10 15:03:48.085257 | orchestrator | Saturday 10 January 2026 15:03:44 +0000 (0:00:00.342) 0:00:13.580 ******
2026-01-10 15:03:48.085262 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:03:48.085269 | orchestrator |
2026-01-10 15:03:48.085274 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-01-10 15:03:48.085278 | orchestrator | Saturday 10 January 2026 15:03:44 +0000 (0:00:00.155) 0:00:13.735 ******
2026-01-10 15:03:48.085283 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:03:48.085288 | orchestrator |
2026-01-10 15:03:48.085292 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-01-10 15:03:48.085297 | orchestrator | Saturday 10 January 2026 15:03:45 +0000 (0:00:00.284) 0:00:14.020 ******
2026-01-10 15:03:48.085301 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:03:48.085306 | orchestrator |
2026-01-10 15:03:48.085310 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-10 15:03:48.085315 | orchestrator | Saturday 10 January 2026 15:03:45 +0000 (0:00:00.251) 0:00:14.271 ******
2026-01-10 15:03:48.085319 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:03:48.085324 | orchestrator |
2026-01-10 15:03:48.085329 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-10 15:03:48.085334 | orchestrator | Saturday 10 January 2026 15:03:47 +0000 (0:00:01.867) 0:00:16.139 ******
2026-01-10 15:03:48.085338 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:03:48.085343 | orchestrator |
2026-01-10 15:03:48.085350 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-10 15:03:48.085354 | orchestrator | Saturday 10 January 2026 15:03:47 +0000 (0:00:00.269) 0:00:16.408 ******
2026-01-10 15:03:48.085358 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:03:48.085362 | orchestrator |
2026-01-10 15:03:48.085369 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:03:51.017555 | orchestrator | Saturday 10 January 2026 15:03:47 +0000 (0:00:00.283) 0:00:16.691 ******
2026-01-10 15:03:51.017707 | orchestrator |
2026-01-10 15:03:51.017719 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:03:51.017727 | orchestrator | Saturday 10 January 2026 15:03:47 +0000 (0:00:00.073) 0:00:16.765 ******
2026-01-10 15:03:51.017735 | orchestrator |
2026-01-10 15:03:51.017742 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:03:51.017749 | orchestrator | Saturday 10 January 2026 15:03:47 +0000 (0:00:00.071) 0:00:16.837 ******
2026-01-10 15:03:51.017757 | orchestrator |
2026-01-10 15:03:51.017763 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-01-10 15:03:51.017770 | orchestrator | Saturday 10 January 2026 15:03:48 +0000 (0:00:00.075) 0:00:16.912 ******
2026-01-10 15:03:51.017778 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:03:51.017785 | orchestrator |
2026-01-10 15:03:51.017792 | orchestrator | TASK [Print report file information] *******************************************
2026-01-10 15:03:51.017801 | orchestrator | Saturday 10 January 2026 15:03:49 +0000 (0:00:01.651) 0:00:18.564 ******
2026-01-10 15:03:51.017844 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-01-10 15:03:51.017852 | orchestrator |  "msg": [
2026-01-10 15:03:51.017867 | orchestrator |  "Validator run completed.",
2026-01-10 15:03:51.017876 | orchestrator |  "You can find the report file here:",
2026-01-10 15:03:51.017884 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-01-10T15:03:32+00:00-report.json",
2026-01-10 15:03:51.017892 | orchestrator |  "on the following host:",
2026-01-10 15:03:51.017900 | orchestrator |  "testbed-manager"
2026-01-10 15:03:51.017908 | orchestrator |  ]
2026-01-10 15:03:51.017915 | orchestrator | }
2026-01-10 15:03:51.017923 | orchestrator |
2026-01-10 15:03:51.017930 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 15:03:51.017939 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-01-10 15:03:51.017948 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 15:03:51.017956 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 15:03:51.017963 | orchestrator |
2026-01-10 15:03:51.017970 | orchestrator |
2026-01-10 15:03:51.017977 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 15:03:51.017986 | orchestrator | Saturday 10 January 2026 15:03:50 +0000 (0:00:00.921) 0:00:19.485 ******
2026-01-10 15:03:51.017994 | orchestrator | ===============================================================================
2026-01-10 15:03:51.018001 | orchestrator | Aggregate test results step one ----------------------------------------- 1.87s
2026-01-10 15:03:51.018007 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.72s
2026-01-10 15:03:51.018074 | orchestrator | Write report file ------------------------------------------------------- 1.65s
2026-01-10 15:03:51.018084 | orchestrator | Gather status data ------------------------------------------------------ 1.42s
2026-01-10 15:03:51.018091 | orchestrator | Get container info ------------------------------------------------------ 1.18s
2026-01-10 15:03:51.018098 | orchestrator | Create report output directory ------------------------------------------ 1.08s
2026-01-10 15:03:51.018105 | orchestrator | Get timestamp for report file ------------------------------------------- 0.94s
2026-01-10 15:03:51.018112 | orchestrator | Print report file information ------------------------------------------- 0.92s
2026-01-10 15:03:51.018118 | orchestrator | Set test result to passed if container is existing ---------------------- 0.57s
2026-01-10 15:03:51.018124 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.54s
2026-01-10 15:03:51.018131 | orchestrator | Set quorum test data ---------------------------------------------------- 0.54s
2026-01-10 15:03:51.018137 | orchestrator | Fail cluster-health if health is not acceptable (strict) ---------------- 0.34s
2026-01-10 15:03:51.018145 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s
2026-01-10 15:03:51.018151 | orchestrator | Set health test data ---------------------------------------------------- 0.33s
2026-01-10 15:03:51.018158 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.33s
2026-01-10 15:03:51.018171 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s
2026-01-10 15:03:51.018178 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s
2026-01-10 15:03:51.018187 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s
2026-01-10 15:03:51.018195 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.30s
2026-01-10 15:03:51.018208 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.28s
2026-01-10 15:03:51.388502 | orchestrator | + osism validate ceph-mgrs
2026-01-10 15:04:23.308052 | orchestrator |
2026-01-10 15:04:23.308160 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-01-10 15:04:23.308193 | orchestrator |
2026-01-10 15:04:23.308201 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-01-10 15:04:23.308208 | orchestrator | Saturday 10 January 2026 15:04:08 +0000 (0:00:00.469) 0:00:00.469 ******
2026-01-10 15:04:23.308215 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:04:23.308221 | orchestrator |
2026-01-10 15:04:23.308227 | orchestrator | TASK [Create report output directory] ******************************************
2026-01-10 15:04:23.308234 | orchestrator | Saturday 10 January 2026 15:04:09 +0000 (0:00:00.877) 0:00:01.347 ******
2026-01-10 15:04:23.308240 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:04:23.308246 | orchestrator |
2026-01-10 15:04:23.308252 | orchestrator | TASK [Define report vars] ******************************************************
2026-01-10 15:04:23.308258 | orchestrator | Saturday 10 January 2026 15:04:10 +0000 (0:00:00.998) 0:00:02.345 ******
2026-01-10 15:04:23.308265 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:04:23.308272 | orchestrator |
2026-01-10 15:04:23.308280 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-01-10 15:04:23.308287 | orchestrator | Saturday 10 January 2026 15:04:10 +0000 (0:00:00.131) 0:00:02.477 ******
2026-01-10 15:04:23.308293 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:04:23.308299 | orchestrator | ok: [testbed-node-1]
2026-01-10 15:04:23.308306 | orchestrator | ok: [testbed-node-2]
2026-01-10 15:04:23.308312 | orchestrator |
2026-01-10 15:04:23.308318 | orchestrator | TASK [Get container info] ******************************************************
2026-01-10 15:04:23.308325 | orchestrator | Saturday 10 January 2026 15:04:10 +0000 (0:00:00.321) 0:00:02.798 ******
2026-01-10 15:04:23.308331 | orchestrator | ok: [testbed-node-1]
2026-01-10 15:04:23.308337 | orchestrator | ok: [testbed-node-2]
2026-01-10 15:04:23.308343 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:04:23.308349 | orchestrator |
2026-01-10 15:04:23.308355 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-01-10 15:04:23.308361 | orchestrator | Saturday 10 January 2026 15:04:11 +0000 (0:00:01.097) 0:00:03.896 ******
2026-01-10 15:04:23.308367 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:04:23.308373 | orchestrator | skipping: [testbed-node-1]
2026-01-10 15:04:23.308379 | orchestrator | skipping: [testbed-node-2]
2026-01-10 15:04:23.308386 | orchestrator |
2026-01-10 15:04:23.308392 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-01-10 15:04:23.308398 | orchestrator | Saturday 10 January 2026 15:04:12 +0000 (0:00:00.347) 0:00:04.243 ******
2026-01-10 15:04:23.308405 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:04:23.308411 | orchestrator | ok: [testbed-node-1]
2026-01-10 15:04:23.308417 | orchestrator | ok: [testbed-node-2]
2026-01-10 15:04:23.308423 | orchestrator |
2026-01-10 15:04:23.308429 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-10 15:04:23.308435 | orchestrator | Saturday 10 January 2026 15:04:12 +0000 (0:00:00.508) 0:00:04.752 ******
2026-01-10 15:04:23.308440 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:04:23.308447 | orchestrator | ok: [testbed-node-1]
2026-01-10 15:04:23.308453 | orchestrator | ok: [testbed-node-2]
2026-01-10 15:04:23.308459 | orchestrator |
2026-01-10 15:04:23.308465 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-01-10 15:04:23.308471 | orchestrator | Saturday 10 January 2026 15:04:12 +0000 (0:00:00.333) 0:00:05.086 ******
2026-01-10 15:04:23.308478 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:04:23.308484 | orchestrator | skipping: [testbed-node-1]
2026-01-10 15:04:23.308490 | orchestrator | skipping: [testbed-node-2]
2026-01-10 15:04:23.308496 | orchestrator |
2026-01-10 15:04:23.308502 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-01-10 15:04:23.308508 | orchestrator | Saturday 10 January 2026 15:04:13 +0000 (0:00:00.307) 0:00:05.394 ******
2026-01-10 15:04:23.308514 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:04:23.308520 | orchestrator | ok: [testbed-node-1]
2026-01-10 15:04:23.308527 | orchestrator | ok: [testbed-node-2]
2026-01-10 15:04:23.308539 | orchestrator |
2026-01-10 15:04:23.308546 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-10 15:04:23.308553 | orchestrator | Saturday 10 January 2026 15:04:13 +0000 (0:00:00.525) 0:00:05.919 ******
2026-01-10 15:04:23.308559 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:04:23.308566 | orchestrator |
2026-01-10 15:04:23.308572 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-10 15:04:23.308578 | orchestrator | Saturday 10 January 2026 15:04:13 +0000 (0:00:00.252) 0:00:06.171 ******
2026-01-10 15:04:23.308584 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:04:23.308591 | orchestrator |
2026-01-10 15:04:23.308597 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-10 15:04:23.308603 | orchestrator | Saturday 10 January 2026 15:04:14 +0000 (0:00:00.255) 0:00:06.427 ******
2026-01-10 15:04:23.308609 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:04:23.308615 | orchestrator |
2026-01-10 15:04:23.308621 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:04:23.308628 | orchestrator | Saturday 10 January 2026 15:04:14 +0000 (0:00:00.272) 0:00:06.700 ******
2026-01-10 15:04:23.308654 | orchestrator |
2026-01-10 15:04:23.308660 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:04:23.308666 | orchestrator | Saturday 10 January 2026 15:04:14 +0000 (0:00:00.072) 0:00:06.773 ******
2026-01-10 15:04:23.308673 | orchestrator |
2026-01-10 15:04:23.308681 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:04:23.308689 | orchestrator | Saturday 10 January 2026 15:04:14 +0000 (0:00:00.071) 0:00:06.844 ******
2026-01-10 15:04:23.308696 | orchestrator |
2026-01-10 15:04:23.308703 | orchestrator | TASK [Print report file information] *******************************************
2026-01-10 15:04:23.308710 | orchestrator | Saturday 10 January 2026 15:04:14 +0000 (0:00:00.074) 0:00:06.919 ******
2026-01-10 15:04:23.308716 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:04:23.308722 | orchestrator |
2026-01-10 15:04:23.308728 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-01-10 15:04:23.308735 | orchestrator | Saturday 10 January 2026 15:04:14 +0000 (0:00:00.248) 0:00:07.168 ******
2026-01-10 15:04:23.308741 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:04:23.308747 | orchestrator |
2026-01-10 15:04:23.308792 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-01-10 15:04:23.308800 | orchestrator | Saturday 10 January 2026 15:04:15 +0000 (0:00:00.271) 0:00:07.439 ******
2026-01-10 15:04:23.308809 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:04:23.308816 | orchestrator |
2026-01-10 15:04:23.308823 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-01-10 15:04:23.308829 | orchestrator | Saturday 10 January 2026 15:04:15 +0000 (0:00:00.140) 0:00:07.579 ******
2026-01-10 15:04:23.308836 | orchestrator | changed: [testbed-node-0]
2026-01-10 15:04:23.308843 | orchestrator |
2026-01-10 15:04:23.308850 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-01-10 15:04:23.308856 | orchestrator | Saturday 10 January 2026 15:04:17 +0000 (0:00:02.136) 0:00:09.716 ******
2026-01-10 15:04:23.308863 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:04:23.308870 | orchestrator |
2026-01-10 15:04:23.308876 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-01-10 15:04:23.308883 | orchestrator | Saturday 10 January 2026 15:04:17 +0000 (0:00:00.475) 0:00:10.191 ******
2026-01-10 15:04:23.308890 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:04:23.308898 | orchestrator |
2026-01-10 15:04:23.308905 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-01-10 15:04:23.308912 | orchestrator | Saturday 10 January 2026 15:04:18 +0000 (0:00:00.346) 0:00:10.538 ******
2026-01-10 15:04:23.308919 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:04:23.308925 | orchestrator |
2026-01-10 15:04:23.308931 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-01-10 15:04:23.308937 | orchestrator | Saturday 10 January 2026 15:04:18 +0000 (0:00:00.148) 0:00:10.686 ******
2026-01-10 15:04:23.308952 | orchestrator | ok: [testbed-node-0]
2026-01-10 15:04:23.308959 | orchestrator |
2026-01-10 15:04:23.308966 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-01-10 15:04:23.308973 | orchestrator | Saturday 10 January 2026 15:04:18 +0000 (0:00:00.153) 0:00:10.840 ******
2026-01-10 15:04:23.308981 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:04:23.308988 | orchestrator |
2026-01-10 15:04:23.308995 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-01-10 15:04:23.309001 | orchestrator | Saturday 10 January 2026 15:04:18 +0000 (0:00:00.262) 0:00:11.102 ******
2026-01-10 15:04:23.309008 | orchestrator | skipping: [testbed-node-0]
2026-01-10 15:04:23.309014 | orchestrator |
2026-01-10 15:04:23.309021 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-10 15:04:23.309028 | orchestrator | Saturday 10 January 2026 15:04:19 +0000 (0:00:00.275) 0:00:11.378 ******
2026-01-10 15:04:23.309035 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:04:23.309042 | orchestrator |
2026-01-10 15:04:23.309049 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-10 15:04:23.309055 | orchestrator | Saturday 10 January 2026 15:04:20 +0000 (0:00:01.301) 0:00:12.679 ******
2026-01-10 15:04:23.309062 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:04:23.309069 | orchestrator |
2026-01-10 15:04:23.309076 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-10 15:04:23.309083 | orchestrator | Saturday 10 January 2026 15:04:20 +0000 (0:00:00.256) 0:00:12.935 ******
2026-01-10 15:04:23.309091 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:04:23.309098 | orchestrator |
2026-01-10 15:04:23.309105 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:04:23.309111 | orchestrator | Saturday 10 January 2026 15:04:20 +0000 (0:00:00.266) 0:00:13.202 ******
2026-01-10 15:04:23.309117 | orchestrator |
2026-01-10 15:04:23.309123 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:04:23.309129 | orchestrator | Saturday 10 January 2026 15:04:21 +0000 (0:00:00.078) 0:00:13.281 ******
2026-01-10 15:04:23.309135 | orchestrator |
2026-01-10 15:04:23.309142 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:04:23.309148 | orchestrator | Saturday 10 January 2026 15:04:21 +0000 (0:00:00.073) 0:00:13.355 ******
2026-01-10 15:04:23.309154 | orchestrator |
2026-01-10 15:04:23.309160 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-01-10 15:04:23.309166 | orchestrator | Saturday 10 January 2026 15:04:21 +0000 (0:00:00.287) 0:00:13.642 ******
2026-01-10 15:04:23.309172 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-10 15:04:23.309179 | orchestrator |
2026-01-10 15:04:23.309184 | orchestrator | TASK [Print report file information] *******************************************
2026-01-10 15:04:23.309190 | orchestrator | Saturday 10 January 2026 15:04:22 +0000 (0:00:01.446) 0:00:15.089 ******
2026-01-10 15:04:23.309196 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-01-10 15:04:23.309203 | orchestrator |  "msg": [
2026-01-10 15:04:23.309209 | orchestrator |  "Validator run completed.",
2026-01-10 15:04:23.309215 | orchestrator |  "You can find the report file here:",
2026-01-10 15:04:23.309221 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-01-10T15:04:08+00:00-report.json",
2026-01-10 15:04:23.309229 | orchestrator |  "on the following host:",
2026-01-10 15:04:23.309235 | orchestrator |  "testbed-manager"
2026-01-10 15:04:23.309241 | orchestrator |  ]
2026-01-10 15:04:23.309248 | orchestrator | }
2026-01-10 15:04:23.309254 | orchestrator |
2026-01-10 15:04:23.309260 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 15:04:23.309267 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-10 15:04:23.309280 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 15:04:23.309295 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-10 15:04:23.661741 | orchestrator |
2026-01-10 15:04:23.661818 | orchestrator |
2026-01-10 15:04:23.661826 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 15:04:23.661847 | orchestrator | Saturday 10 January 2026 15:04:23 +0000 (0:00:00.424) 0:00:15.513 ******
2026-01-10 15:04:23.661852 | orchestrator | ===============================================================================
2026-01-10 15:04:23.661856 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.14s
2026-01-10 15:04:23.661860 | orchestrator | Write report file ------------------------------------------------------- 1.45s
2026-01-10 15:04:23.661863 | orchestrator | Aggregate test results step one ----------------------------------------- 1.30s
2026-01-10 15:04:23.661867 | orchestrator | Get container info ------------------------------------------------------ 1.10s
2026-01-10 15:04:23.661871 | orchestrator | Create report output directory ------------------------------------------ 1.00s
2026-01-10 15:04:23.661875 | orchestrator | Get timestamp for report file ------------------------------------------- 0.88s
2026-01-10 15:04:23.661879 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.53s
2026-01-10 15:04:23.661883 | orchestrator | Set test result to passed if container is existing ---------------------- 0.51s
2026-01-10 15:04:23.661887 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.48s
2026-01-10 15:04:23.661890 | orchestrator | Flush handlers ---------------------------------------------------------- 0.44s
2026-01-10 15:04:23.661894 | orchestrator | Print report file information ------------------------------------------- 0.42s
2026-01-10 15:04:23.661898 | orchestrator | Set test result to failed if container is missing ----------------------- 0.35s
2026-01-10 15:04:23.661902 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.35s
2026-01-10 15:04:23.661905 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s
2026-01-10 15:04:23.661909 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s
2026-01-10 15:04:23.661913 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.31s
2026-01-10 15:04:23.661916 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.28s
2026-01-10 15:04:23.661920 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s
2026-01-10 15:04:23.661924 | orchestrator | Fail due to missing containers ------------------------------------------ 0.27s
2026-01-10 15:04:23.661927 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s
2026-01-10 15:04:24.006414 | orchestrator | + osism validate ceph-osds
2026-01-10 15:04:45.203872 | orchestrator |
2026-01-10 15:04:45.203973 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-01-10 15:04:45.203985 | orchestrator |
2026-01-10 15:04:45.203992 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-01-10 15:04:45.204000 | orchestrator | Saturday 10 January 2026 15:04:40 +0000 (0:00:00.394) 0:00:00.394 ******
2026-01-10 15:04:45.204008 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 15:04:45.204015 | orchestrator |
2026-01-10 15:04:45.204021 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-10 15:04:45.204028 | orchestrator | Saturday 10 January 2026 15:04:41 +0000 (0:00:00.720) 0:00:01.114 ******
2026-01-10 15:04:45.204034 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 15:04:45.204040 | orchestrator |
2026-01-10 15:04:45.204046 | orchestrator | TASK [Create report output directory] ******************************************
2026-01-10 15:04:45.204054 | orchestrator | Saturday 10 January 2026 15:04:42 +0000 (0:00:00.471) 0:00:01.586 ******
2026-01-10 15:04:45.204078 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 15:04:45.204082 | orchestrator |
2026-01-10 15:04:45.204086 | orchestrator | TASK [Define report vars] ******************************************************
2026-01-10 15:04:45.204090 | orchestrator | Saturday 10 January 2026 15:04:42 +0000 (0:00:00.677) 0:00:02.264 ******
2026-01-10 15:04:45.204094 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:04:45.204099 | orchestrator |
2026-01-10 15:04:45.204103 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-01-10 15:04:45.204107 | orchestrator | Saturday 10 January 2026 15:04:42 +0000 (0:00:00.127) 0:00:02.392 ******
2026-01-10 15:04:45.204111 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:04:45.204115 | orchestrator |
2026-01-10 15:04:45.204119 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-01-10 15:04:45.204123 | orchestrator | Saturday 10 January 2026 15:04:43 +0000 (0:00:00.139) 0:00:02.531 ******
2026-01-10 15:04:45.204126 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:04:45.204130 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:04:45.204134 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:04:45.204137 | orchestrator |
2026-01-10 15:04:45.204141 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-01-10 15:04:45.204145 |
orchestrator | Saturday 10 January 2026 15:04:43 +0000 (0:00:00.305) 0:00:02.836 ****** 2026-01-10 15:04:45.204148 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:04:45.204152 | orchestrator | 2026-01-10 15:04:45.204156 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-01-10 15:04:45.204159 | orchestrator | Saturday 10 January 2026 15:04:43 +0000 (0:00:00.144) 0:00:02.981 ****** 2026-01-10 15:04:45.204163 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:04:45.204167 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:04:45.204170 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:04:45.204174 | orchestrator | 2026-01-10 15:04:45.204178 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-01-10 15:04:45.204182 | orchestrator | Saturday 10 January 2026 15:04:43 +0000 (0:00:00.330) 0:00:03.312 ****** 2026-01-10 15:04:45.204186 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:04:45.204189 | orchestrator | 2026-01-10 15:04:45.204193 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-10 15:04:45.204197 | orchestrator | Saturday 10 January 2026 15:04:44 +0000 (0:00:00.655) 0:00:03.968 ****** 2026-01-10 15:04:45.204200 | orchestrator | ok: [testbed-node-3] 2026-01-10 15:04:45.204204 | orchestrator | ok: [testbed-node-4] 2026-01-10 15:04:45.204208 | orchestrator | ok: [testbed-node-5] 2026-01-10 15:04:45.204211 | orchestrator | 2026-01-10 15:04:45.204215 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-01-10 15:04:45.204219 | orchestrator | Saturday 10 January 2026 15:04:44 +0000 (0:00:00.486) 0:00:04.454 ****** 2026-01-10 15:04:45.204224 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'eb8058cfc6674ea9f131fb1a7dc79ed84c93cdf8745ceced7b6ffa2b56989b8c', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'name': '/nova_compute', 'state': 
'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-10 15:04:45.204231 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8ffebf8242d87d6bcd3e961d1a809a541341f305f445edfa5f83932b751243e3', 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-01-10 15:04:45.204236 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'aecd8a362ea1084b945a5e1ae2c9c001bab5c753f86383d916ace9372c8a989d', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-01-10 15:04:45.204241 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5f00182fb6ef2f795345e26aa34af1cf08bbe15d034df1583803401591af15d4', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-01-10 15:04:45.204258 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7d5285110ba6a80bbd413ffdbcef388193590ab2f2d077e6752122aadfcd563a', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-01-10 15:04:45.204277 | orchestrator | skipping: [testbed-node-3] => (item={'id': '397e439438585ee66a20a1c36b6027615cc8eed3d50f740b2df85b43faac8117', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2026-01-10 15:04:45.204281 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c36de64b20b13b8686cc2d0a49d4ae3a8994dfb3c5122fe01103c3fe69634110', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-01-10 15:04:45.204287 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'd27a6ce6216b4db9ff54e014e6aa5d3c9a87b0a773e89009ff22b11691d1693b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2026-01-10 15:04:45.204291 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c9ac889102566543767a8f621ce9249e0a90450ed847e1e9ad37ab66e55dadfe', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2026-01-10 15:04:45.204298 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'faa09d36eb0710f0fd5ee2cd1a3d769983ea28473388bae3163b938221187b9e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2026-01-10 15:04:45.204305 | orchestrator | ok: [testbed-node-3] => (item={'id': 'c7bef0baece7b8bb92522fadcab33b36067e891aeed1b6bef96a5f49344fcb61', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 24 minutes'}) 2026-01-10 15:04:45.204312 | orchestrator | ok: [testbed-node-3] => (item={'id': 'b47844d5afed0244e3274a63e2d56c4237a81f60e651c1d3e4a875fe4221e6a3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'}) 2026-01-10 15:04:45.204318 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3b22682b4db3ed96bc9cb9fe6b4a43b347075455a418a06c520a6fc9c1bb53d9', 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2026-01-10 15:04:45.204330 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0f3d832e37df73f4d1c82889840ecc1f4208fa804519949209cc5d015a7dc47e', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2026-01-10 15:04:45.204336 | 
orchestrator | skipping: [testbed-node-3] => (item={'id': '906e8db25676821f4b30124f6d896d1b19b056fb53e45e2337f21c18382016db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-01-10 15:04:45.204340 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6416739379db8287868dcace6b83e21b128e42e9e54f62c0ef7b179ce0ba6635', 'image': 'registry.osism.tech/kolla/cron:2025.1', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2026-01-10 15:04:45.204344 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7422b3dd5651a2a7f7de044a13ab641b46720b07769c0032ad1cc619bc3cb3b2', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2026-01-10 15:04:45.204348 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b955ed8248ba7f621cdefb336e5594aca190e5324e001b42f3eb7cb386356533', 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2026-01-10 15:04:45.204356 | orchestrator | skipping: [testbed-node-4] => (item={'id': '48b4375cc859367fcbd89ec22b9718d2b0271699ccd6c642b73f5fda0fcb4652', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-10 15:04:45.204360 | orchestrator | skipping: [testbed-node-4] => (item={'id': '09bc1edaec33cefb25d898282b78650ef06b7ca6afef3b82d409c9481b8f9d8d', 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-01-10 15:04:45.204419 | orchestrator | skipping: [testbed-node-4] => (item={'id': '059826d8663db5149762e44a301df95d325006805d2eb2032a85033f0458ef9b', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  
2026-01-10 15:04:45.204434 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5b2e48dacba3b725b49dbec99df46bb827e81bba6bf4dbf9087c8af3ba433353', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-01-10 15:04:45.482981 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3a2e0e868067d18b7d329e0fb0bdc2d6221b9f667e2c36404aba0863b35ee60a', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-01-10 15:04:45.483078 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fa57e73a4915eb78186f9b1c92bc8387bbf9664342a0442338066f2c9550bce9', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2026-01-10 15:04:45.483089 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e517b50659942dd9537253b27f9bf9cb04a7942d302eb12b50bbf6ca0653f97d', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-01-10 15:04:45.483094 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e713651991144a9f599c7bff991d23b8e0eaf9323ca328a269e355ed389bb1cd', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2026-01-10 15:04:45.483098 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'acd71656e195d90a21a48198487f7959cc5ef5964aa711a5d3c3fce7f04f2a05', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2026-01-10 15:04:45.483103 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'9809791c280bec14c9b2bf9045b032b08ed5d53e19290e0d3526d328d5fe1120', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2026-01-10 15:04:45.483110 | orchestrator | ok: [testbed-node-4] => (item={'id': 'a9d94a40916a4077f7231fa1c24481b90ce99e86a88b489c4bbd4dbda6cd79c5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'}) 2026-01-10 15:04:45.483115 | orchestrator | ok: [testbed-node-4] => (item={'id': '4347bfa78a783a19d57f5c804f06d84463817613cf15ab3a4f34fc83544eac75', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'}) 2026-01-10 15:04:45.483133 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd456230e9cd31d8a00612abea559df9489ee058f204cfa56ede2c3f9bb4dda95', 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2026-01-10 15:04:45.483137 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f47333c5bd6ea16a2b326e5991e7ddddd669c951bb1c7dbd0fa1ccb830f0f7e9', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2026-01-10 15:04:45.483155 | orchestrator | skipping: [testbed-node-4] => (item={'id': '29672d6ad9d421dc07e54ddd240d6e4d1c9334c1c3f5698e301289d161fba3d6', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-01-10 15:04:45.483160 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ef4e38501123334994dd8e0c1b75ea5f927d8b1a45e8320d5365c8e221db6dee', 'image': 'registry.osism.tech/kolla/cron:2025.1', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2026-01-10 15:04:45.483164 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': '37eb8c7e80b6f7a2e2f1fbf309119d5d5e44aa3ffd5a5e2ecf549983af16fdf8', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2026-01-10 15:04:45.483168 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8049b278eec4b4bf0c26e5dba9d10fc601cd6699bddd6d319560aa085ca40937', 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2026-01-10 15:04:45.483172 | orchestrator | skipping: [testbed-node-5] => (item={'id': '15cff6d87e2877bcecedabd17963779a42be904a41d6e25449ed4a88f8361e44', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-01-10 15:04:45.483187 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a2c70c03b102fcf7b2ff629c8a83a5dadd579b025aa088f3bb88e5d57f63916e', 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-01-10 15:04:45.483192 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c122264f9b18907a26854afffe0bfea1b48e6606e583bf647a57e488c0255c82', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-01-10 15:04:45.483195 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1b82a1596c1e97ecd2e961eb814be49cb9c14ee3d494237cebadfcc9143f1582', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-01-10 15:04:45.483199 | orchestrator | skipping: [testbed-node-5] => (item={'id': '62f266e39b9039bdb9882e12bbe8b8617e62b115284978801da079c593084830', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'name': '/prometheus_libvirt_exporter', 'state': 
'running', 'status': 'Up 14 minutes'})  2026-01-10 15:04:45.483203 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7a1a49dd5a112304db4739fbeb5ee1a03a2400acda10d38787329257b86e694c', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2026-01-10 15:04:45.483207 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a7229646cfcf9c9097cc982a04ce130f66caf0d6d60011555fe4a264cfaceedd', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-01-10 15:04:45.483211 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'efde41b9bdf36d4c2a1b1eefcaf897192ca9fc0540c73fd3151b1d8ff378703e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2026-01-10 15:04:45.483214 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6fc77c4beec20137ce88080da9b8cfc9bb5feeb0a20b22136f4342b3376d84aa', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2026-01-10 15:04:45.483221 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6c8ecee3c597ed4140f6bf33e64e065c38beffbf9bdf3c9edda8661e824cdd32', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2026-01-10 15:04:45.483229 | orchestrator | ok: [testbed-node-5] => (item={'id': '84652ed3ad650a3b9e0b8cbe1b068e82ffedeb59dd46152d38c83b0b59ebb4be', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 24 minutes'}) 2026-01-10 15:04:45.483233 | orchestrator | ok: [testbed-node-5] => (item={'id': 'dbacafdba361fec798f63ae74e6d4820f31d55d2d6d5984b3abd257226301fbb', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'}) 2026-01-10 15:04:45.483236 | orchestrator | skipping: [testbed-node-5] => (item={'id': '65a331ff9d2dc443afa5bf36012ef046b4db0f0684b0a538a89c74cec779e4e0', 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2026-01-10 15:04:45.483240 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ef9f0a745fd90e501c3b430974a68ea08d95d700b3a07aa7307031bf30ea20f9', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2026-01-10 15:04:45.483244 | orchestrator | skipping: [testbed-node-5] => (item={'id': '204c6a25468d0d2df0692de24968fadf1a23d25efbfde7432c54fc3f0f2ce996', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-01-10 15:04:45.483248 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5077cbd0244a8113967b3cc16bc4c09152fd50dcc935d102b3d3b10f6285b2dc', 'image': 'registry.osism.tech/kolla/cron:2025.1', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2026-01-10 15:04:45.483251 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1da7dc0499ad4a55e68dd33c13bc2d9827f6c1167a20cdb52e9abc3f0c984cee', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2026-01-10 15:04:45.483259 | orchestrator | skipping: [testbed-node-5] => (item={'id': '477f48d504d80d5fad60b09b3fee86790d5bb3132c645aaf071d3fe53729298a', 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2026-01-10 15:05:00.541143 | orchestrator | 2026-01-10 15:05:00.541199 | orchestrator | TASK [Get count of ceph-osd containers on 
host] ********************************
2026-01-10 15:05:00.541205 | orchestrator | Saturday 10 January 2026 15:04:45 +0000 (0:00:00.522) 0:00:04.976 ******
2026-01-10 15:05:00.541210 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:05:00.541215 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:05:00.541219 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:05:00.541223 | orchestrator |
2026-01-10 15:05:00.541227 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-01-10 15:05:00.541231 | orchestrator | Saturday 10 January 2026 15:04:45 +0000 (0:00:00.556) 0:00:05.288 ******
2026-01-10 15:05:00.541235 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:05:00.541239 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:05:00.541243 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:05:00.541247 | orchestrator |
2026-01-10 15:05:00.541250 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-01-10 15:05:00.541254 | orchestrator | Saturday 10 January 2026 15:04:46 +0000 (0:00:00.330) 0:00:05.844 ******
2026-01-10 15:05:00.541258 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:05:00.541262 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:05:00.541266 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:05:00.541270 | orchestrator |
2026-01-10 15:05:00.541273 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-10 15:05:00.541277 | orchestrator | Saturday 10 January 2026 15:04:46 +0000 (0:00:00.285) 0:00:06.175 ******
2026-01-10 15:05:00.541281 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:05:00.541285 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:05:00.541289 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:05:00.541303 | orchestrator |
2026-01-10 15:05:00.541307 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-01-10 15:05:00.541312 | orchestrator | Saturday 10 January 2026 15:04:46 +0000 (0:00:00.285) 0:00:06.461 ******
2026-01-10 15:05:00.541318 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-01-10 15:05:00.541328 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-01-10 15:05:00.541337 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:05:00.541343 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-01-10 15:05:00.541351 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-01-10 15:05:00.541357 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:05:00.541364 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-01-10 15:05:00.541371 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-01-10 15:05:00.541377 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:05:00.541383 | orchestrator |
2026-01-10 15:05:00.541389 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-01-10 15:05:00.541395 | orchestrator | Saturday 10 January 2026 15:04:47 +0000 (0:00:00.325) 0:00:06.787 ******
2026-01-10 15:05:00.541402 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:05:00.541408 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:05:00.541414 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:05:00.541420 | orchestrator |
2026-01-10 15:05:00.541427 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-01-10 15:05:00.541433 | orchestrator | Saturday 10 January 2026 15:04:47 +0000 (0:00:00.519) 0:00:07.306 ******
2026-01-10 15:05:00.541440 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:05:00.541446 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:05:00.541452 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:05:00.541458 | orchestrator |
2026-01-10 15:05:00.541465 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-01-10 15:05:00.541471 | orchestrator | Saturday 10 January 2026 15:04:48 +0000 (0:00:00.302) 0:00:07.608 ******
2026-01-10 15:05:00.541477 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:05:00.541483 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:05:00.541490 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:05:00.541496 | orchestrator |
2026-01-10 15:05:00.541502 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-01-10 15:05:00.541535 | orchestrator | Saturday 10 January 2026 15:04:48 +0000 (0:00:00.327) 0:00:07.936 ******
2026-01-10 15:05:00.541541 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:05:00.541548 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:05:00.541554 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:05:00.541560 | orchestrator |
2026-01-10 15:05:00.541567 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-10 15:05:00.541573 | orchestrator | Saturday 10 January 2026 15:04:48 +0000 (0:00:00.350) 0:00:08.286 ******
2026-01-10 15:05:00.541580 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:05:00.541585 | orchestrator |
2026-01-10 15:05:00.541591 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-10 15:05:00.541597 | orchestrator | Saturday 10 January 2026 15:04:49 +0000 (0:00:00.498) 0:00:08.785 ******
2026-01-10 15:05:00.541603 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:05:00.541609 | orchestrator |
2026-01-10 15:05:00.541614 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-10 15:05:00.541620 | orchestrator | Saturday 10 January 2026 15:04:49 +0000 (0:00:00.713) 0:00:09.499 ******
2026-01-10 15:05:00.541626 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:05:00.541632 | orchestrator |
2026-01-10 15:05:00.541637 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:05:00.541650 | orchestrator | Saturday 10 January 2026 15:04:50 +0000 (0:00:00.259) 0:00:09.759 ******
2026-01-10 15:05:00.541656 | orchestrator |
2026-01-10 15:05:00.541663 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:05:00.541669 | orchestrator | Saturday 10 January 2026 15:04:50 +0000 (0:00:00.073) 0:00:09.833 ******
2026-01-10 15:05:00.541675 | orchestrator |
2026-01-10 15:05:00.541681 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:05:00.541732 | orchestrator | Saturday 10 January 2026 15:04:50 +0000 (0:00:00.071) 0:00:09.904 ******
2026-01-10 15:05:00.541741 | orchestrator |
2026-01-10 15:05:00.541748 | orchestrator | TASK [Print report file information] *******************************************
2026-01-10 15:05:00.541754 | orchestrator | Saturday 10 January 2026 15:04:50 +0000 (0:00:00.074) 0:00:09.979 ******
2026-01-10 15:05:00.541761 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:05:00.541767 | orchestrator |
2026-01-10 15:05:00.541773 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-01-10 15:05:00.541779 | orchestrator | Saturday 10 January 2026 15:04:50 +0000 (0:00:00.303) 0:00:10.282 ******
2026-01-10 15:05:00.541785 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:05:00.541790 | orchestrator |
2026-01-10 15:05:00.541797 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-10 15:05:00.541802 |
orchestrator | Saturday 10 January 2026 15:04:51 +0000 (0:00:00.256) 0:00:10.539 ******
2026-01-10 15:05:00.541808 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:05:00.541814 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:05:00.541820 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:05:00.541826 | orchestrator |
2026-01-10 15:05:00.541832 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-01-10 15:05:00.541838 | orchestrator | Saturday 10 January 2026 15:04:51 +0000 (0:00:00.314) 0:00:10.854 ******
2026-01-10 15:05:00.541843 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:05:00.541849 | orchestrator |
2026-01-10 15:05:00.541855 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-01-10 15:05:00.541861 | orchestrator | Saturday 10 January 2026 15:04:51 +0000 (0:00:00.232) 0:00:11.086 ******
2026-01-10 15:05:00.541867 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-10 15:05:00.541873 | orchestrator |
2026-01-10 15:05:00.541879 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-01-10 15:05:00.541885 | orchestrator | Saturday 10 January 2026 15:04:53 +0000 (0:00:02.277) 0:00:13.363 ******
2026-01-10 15:05:00.541891 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:05:00.541897 | orchestrator |
2026-01-10 15:05:00.541903 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-01-10 15:05:00.541909 | orchestrator | Saturday 10 January 2026 15:04:53 +0000 (0:00:00.151) 0:00:13.515 ******
2026-01-10 15:05:00.541915 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:05:00.541921 | orchestrator |
2026-01-10 15:05:00.541926 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-01-10 15:05:00.541933 | orchestrator | Saturday 10 January 2026 15:04:54 +0000 (0:00:00.333) 0:00:13.849 ******
2026-01-10 15:05:00.541938 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:05:00.541945 | orchestrator |
2026-01-10 15:05:00.541950 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-01-10 15:05:00.541956 | orchestrator | Saturday 10 January 2026 15:04:54 +0000 (0:00:00.150) 0:00:13.999 ******
2026-01-10 15:05:00.541962 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:05:00.541968 | orchestrator |
2026-01-10 15:05:00.541974 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-10 15:05:00.541984 | orchestrator | Saturday 10 January 2026 15:04:54 +0000 (0:00:00.137) 0:00:14.137 ******
2026-01-10 15:05:00.541990 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:05:00.541996 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:05:00.542002 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:05:00.542008 | orchestrator |
2026-01-10 15:05:00.542069 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-01-10 15:05:00.542075 | orchestrator | Saturday 10 January 2026 15:04:54 +0000 (0:00:00.305) 0:00:14.443 ******
2026-01-10 15:05:00.542082 | orchestrator | changed: [testbed-node-3]
2026-01-10 15:05:00.542088 | orchestrator | changed: [testbed-node-4]
2026-01-10 15:05:00.542094 | orchestrator | changed: [testbed-node-5]
2026-01-10 15:05:00.542101 | orchestrator |
2026-01-10 15:05:00.542107 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-01-10 15:05:00.542113 | orchestrator | Saturday 10 January 2026 15:04:57 +0000 (0:00:02.941) 0:00:17.384 ******
2026-01-10 15:05:00.542119 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:05:00.542126 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:05:00.542132 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:05:00.542139 | orchestrator |
2026-01-10 15:05:00.542145 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-01-10 15:05:00.542152 | orchestrator | Saturday 10 January 2026 15:04:58 +0000 (0:00:00.517) 0:00:17.902 ******
2026-01-10 15:05:00.542158 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:05:00.542165 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:05:00.542169 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:05:00.542173 | orchestrator |
2026-01-10 15:05:00.542177 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-01-10 15:05:00.542180 | orchestrator | Saturday 10 January 2026 15:04:58 +0000 (0:00:00.539) 0:00:18.441 ******
2026-01-10 15:05:00.542184 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:05:00.542188 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:05:00.542192 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:05:00.542195 | orchestrator |
2026-01-10 15:05:00.542199 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-01-10 15:05:00.542203 | orchestrator | Saturday 10 January 2026 15:04:59 +0000 (0:00:00.351) 0:00:18.792 ******
2026-01-10 15:05:00.542206 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:05:00.542210 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:05:00.542214 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:05:00.542217 | orchestrator |
2026-01-10 15:05:00.542221 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-01-10 15:05:00.542225 | orchestrator | Saturday 10 January 2026 15:04:59 +0000 (0:00:00.594) 0:00:19.387 ******
2026-01-10 15:05:00.542228 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:05:00.542232 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:05:00.542236 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:05:00.542239 | orchestrator |
2026-01-10 15:05:00.542243 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-01-10 15:05:00.542247 | orchestrator | Saturday 10 January 2026 15:05:00 +0000 (0:00:00.327) 0:00:19.714 ******
2026-01-10 15:05:00.542250 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:05:00.542254 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:05:00.542258 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:05:00.542262 | orchestrator |
2026-01-10 15:05:00.542270 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-10 15:05:08.625852 | orchestrator | Saturday 10 January 2026 15:05:00 +0000 (0:00:00.334) 0:00:20.049 ******
2026-01-10 15:05:08.625963 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:05:08.625981 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:05:08.625994 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:05:08.626006 | orchestrator |
2026-01-10 15:05:08.626051 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-01-10 15:05:08.626065 | orchestrator | Saturday 10 January 2026 15:05:01 +0000 (0:00:00.557) 0:00:20.607 ******
2026-01-10 15:05:08.626078 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:05:08.626091 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:05:08.626103 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:05:08.626116 | orchestrator |
2026-01-10 15:05:08.626129 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-01-10 15:05:08.626168 | orchestrator | Saturday 10 January 2026 15:05:02 +0000 (0:00:01.022) 0:00:21.629 ******
2026-01-10 15:05:08.626182 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:05:08.626194 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:05:08.626206 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:05:08.626219 | orchestrator |
2026-01-10 15:05:08.626231 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-01-10 15:05:08.626243 | orchestrator | Saturday 10 January 2026 15:05:02 +0000 (0:00:00.443) 0:00:22.073 ******
2026-01-10 15:05:08.626256 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:05:08.626270 | orchestrator | skipping: [testbed-node-4]
2026-01-10 15:05:08.626282 | orchestrator | skipping: [testbed-node-5]
2026-01-10 15:05:08.626294 | orchestrator |
2026-01-10 15:05:08.626307 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-01-10 15:05:08.626319 | orchestrator | Saturday 10 January 2026 15:05:02 +0000 (0:00:00.320) 0:00:22.394 ******
2026-01-10 15:05:08.626332 | orchestrator | ok: [testbed-node-3]
2026-01-10 15:05:08.626344 | orchestrator | ok: [testbed-node-4]
2026-01-10 15:05:08.626356 | orchestrator | ok: [testbed-node-5]
2026-01-10 15:05:08.626368 | orchestrator |
2026-01-10 15:05:08.626380 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-01-10 15:05:08.626392 | orchestrator | Saturday 10 January 2026 15:05:03 +0000 (0:00:00.340) 0:00:22.734 ******
2026-01-10 15:05:08.626403 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 15:05:08.626415 | orchestrator |
2026-01-10 15:05:08.626427 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-01-10 15:05:08.626440 | orchestrator | Saturday 10 January 2026 15:05:03 +0000 (0:00:00.263) 0:00:22.998 ******
2026-01-10 15:05:08.626453 | orchestrator | skipping: [testbed-node-3]
2026-01-10 15:05:08.626465 | orchestrator |
2026-01-10 15:05:08.626478 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-10 15:05:08.626489 | orchestrator | Saturday 10 January 2026 15:05:04 +0000 (0:00:00.596) 0:00:23.594 ******
2026-01-10 15:05:08.626500 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 15:05:08.626511 | orchestrator |
2026-01-10 15:05:08.626520 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-10 15:05:08.626538 | orchestrator | Saturday 10 January 2026 15:05:05 +0000 (0:00:01.576) 0:00:25.171 ******
2026-01-10 15:05:08.626545 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 15:05:08.626552 | orchestrator |
2026-01-10 15:05:08.626559 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-10 15:05:08.626565 | orchestrator | Saturday 10 January 2026 15:05:05 +0000 (0:00:00.272) 0:00:25.443 ******
2026-01-10 15:05:08.626572 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 15:05:08.626579 | orchestrator |
2026-01-10 15:05:08.626586 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:05:08.626593 | orchestrator | Saturday 10 January 2026 15:05:06 +0000 (0:00:00.294) 0:00:25.737 ******
2026-01-10 15:05:08.626600 | orchestrator |
2026-01-10 15:05:08.626606 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:05:08.626613 | orchestrator | Saturday 10 January 2026 15:05:06 +0000 (0:00:00.074) 0:00:25.812 ******
2026-01-10 15:05:08.626620 | orchestrator |
2026-01-10 15:05:08.626626 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-10 15:05:08.626633 | orchestrator | Saturday 10 January 2026 15:05:06 +0000 (0:00:00.070) 0:00:25.883 ******
2026-01-10 15:05:08.626765 | orchestrator |
2026-01-10 15:05:08.626778 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-01-10 15:05:08.626785 | orchestrator | Saturday 10 January 2026 15:05:06 +0000 (0:00:00.075) 0:00:25.959 ******
2026-01-10 15:05:08.626792 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-10 15:05:08.626799 | orchestrator |
2026-01-10 15:05:08.626805 | orchestrator | TASK [Print report file information] *******************************************
2026-01-10 15:05:08.626821 | orchestrator | Saturday 10 January 2026 15:05:07 +0000 (0:00:01.328) 0:00:27.287 ******
2026-01-10 15:05:08.626828 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-01-10 15:05:08.626834 | orchestrator |  "msg": [
2026-01-10 15:05:08.626841 | orchestrator |  "Validator run completed.",
2026-01-10 15:05:08.626848 | orchestrator |  "You can find the report file here:",
2026-01-10 15:05:08.626855 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-01-10T15:04:41+00:00-report.json",
2026-01-10 15:05:08.626863 | orchestrator |  "on the following host:",
2026-01-10 15:05:08.626869 | orchestrator |  "testbed-manager"
2026-01-10 15:05:08.626876 | orchestrator |  ]
2026-01-10 15:05:08.626883 | orchestrator | }
2026-01-10 15:05:08.626890 | orchestrator |
2026-01-10 15:05:08.626896 | orchestrator | PLAY RECAP *********************************************************************
2026-01-10 15:05:08.626904 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-10 15:05:08.626912 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-10 15:05:08.626937 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-10 15:05:08.626944 | orchestrator |
2026-01-10 15:05:08.626950 | orchestrator |
2026-01-10 15:05:08.626957 | orchestrator | TASKS RECAP ********************************************************************
2026-01-10 15:05:08.626963 | orchestrator | Saturday 10 January 2026 15:05:08 +0000 (0:00:00.423) 0:00:27.711 ******
2026-01-10 15:05:08.626970 | orchestrator | ===============================================================================
2026-01-10 15:05:08.626976 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.94s
2026-01-10 15:05:08.626983 | orchestrator | Get ceph osd tree ------------------------------------------------------- 2.28s
2026-01-10 15:05:08.626990 | orchestrator | Aggregate test results step one ----------------------------------------- 1.58s
2026-01-10 15:05:08.626996 | orchestrator | Write report file ------------------------------------------------------- 1.33s
2026-01-10 15:05:08.627003 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 1.02s
2026-01-10 15:05:08.627009 | orchestrator | Get timestamp for report file ------------------------------------------- 0.72s
2026-01-10 15:05:08.627015 | orchestrator | Aggregate test results step two ----------------------------------------- 0.71s
2026-01-10 15:05:08.627021 | orchestrator | Create report output directory ------------------------------------------ 0.68s
2026-01-10 15:05:08.627027 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.66s
2026-01-10 15:05:08.627033 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.60s
2026-01-10 15:05:08.627039 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.59s
2026-01-10 15:05:08.627045 | orchestrator | Prepare test data ------------------------------------------------------- 0.56s
2026-01-10 15:05:08.627051 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.56s
2026-01-10 15:05:08.627057 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.54s
2026-01-10 15:05:08.627064 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.52s
2026-01-10 15:05:08.627070 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.52s
2026-01-10 15:05:08.627076 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.52s
2026-01-10 15:05:08.627082 | orchestrator | Aggregate test results step one ----------------------------------------- 0.50s
2026-01-10 15:05:08.627088 | orchestrator | Prepare test data ------------------------------------------------------- 0.49s
2026-01-10 15:05:08.627094 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.47s
2026-01-10 15:05:08.940807 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2026-01-10 15:05:08.950241 | orchestrator | + set -e
2026-01-10 15:05:08.950926 | orchestrator | + source /opt/manager-vars.sh
2026-01-10 15:05:08.950962 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-10 15:05:08.950974 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-10 15:05:08.950984 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-10 15:05:08.950993 | orchestrator | ++ CEPH_VERSION=reef
2026-01-10 15:05:08.951003 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-10 15:05:08.951015 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-10 15:05:08.951025 | orchestrator | ++ export MANAGER_VERSION=latest
2026-01-10 15:05:08.951034 | orchestrator | ++ MANAGER_VERSION=latest
2026-01-10 15:05:08.951043 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-01-10 15:05:08.951053 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-01-10 15:05:08.951063 | orchestrator | ++ export ARA=false
2026-01-10 15:05:08.951073 | orchestrator | ++ ARA=false
2026-01-10 15:05:08.951083 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-10 15:05:08.951093 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-10 15:05:08.951102 | orchestrator | ++ export TEMPEST=false
2026-01-10 15:05:08.951112 | orchestrator | ++ TEMPEST=false
2026-01-10 15:05:08.951121 | orchestrator | ++ export IS_ZUUL=true
2026-01-10 15:05:08.951130 | orchestrator | ++ IS_ZUUL=true
2026-01-10 15:05:08.951139 | orchestrator | ++ export
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.86 2026-01-10 15:05:08.951149 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.86 2026-01-10 15:05:08.951159 | orchestrator | ++ export EXTERNAL_API=false 2026-01-10 15:05:08.951168 | orchestrator | ++ EXTERNAL_API=false 2026-01-10 15:05:08.951177 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-10 15:05:08.951185 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-10 15:05:08.951194 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-10 15:05:08.951204 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-10 15:05:08.951213 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-10 15:05:08.951221 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-10 15:05:08.951229 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-01-10 15:05:08.951238 | orchestrator | + source /etc/os-release 2026-01-10 15:05:08.951246 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2026-01-10 15:05:08.951254 | orchestrator | ++ NAME=Ubuntu 2026-01-10 15:05:08.951263 | orchestrator | ++ VERSION_ID=24.04 2026-01-10 15:05:08.951272 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2026-01-10 15:05:08.951282 | orchestrator | ++ VERSION_CODENAME=noble 2026-01-10 15:05:08.951291 | orchestrator | ++ ID=ubuntu 2026-01-10 15:05:08.951301 | orchestrator | ++ ID_LIKE=debian 2026-01-10 15:05:08.951311 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-01-10 15:05:08.951320 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-01-10 15:05:08.951330 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-01-10 15:05:08.951339 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-01-10 15:05:08.951349 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-01-10 15:05:08.951358 | orchestrator | ++ LOGO=ubuntu-logo 2026-01-10 15:05:08.951367 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-01-10 15:05:08.951378 | orchestrator | + packages='libmonitoring-plugin-perl 
libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-01-10 15:05:08.951389 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-01-10 15:05:08.967958 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-01-10 15:05:32.726871 | orchestrator | 2026-01-10 15:05:32.799767 | orchestrator | # Status of Elasticsearch 2026-01-10 15:05:32.799840 | orchestrator | 2026-01-10 15:05:32.799847 | orchestrator | + pushd /opt/configuration/contrib 2026-01-10 15:05:32.799854 | orchestrator | + echo 2026-01-10 15:05:32.799859 | orchestrator | + echo '# Status of Elasticsearch' 2026-01-10 15:05:32.799865 | orchestrator | + echo 2026-01-10 15:05:32.799871 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-01-10 15:05:32.914229 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-01-10 15:05:32.914352 | orchestrator | 2026-01-10 15:05:32.914367 | orchestrator | # Status of MariaDB 2026-01-10 15:05:32.914377 | orchestrator | 2026-01-10 15:05:32.914386 | orchestrator | + echo 2026-01-10 15:05:32.914395 | orchestrator | + echo '# Status of MariaDB' 2026-01-10 15:05:32.914438 | orchestrator | + echo 2026-01-10 15:05:32.915238 | orchestrator | ++ semver latest 10.0.0-0 2026-01-10 15:05:32.959541 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-10 15:05:32.959633 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-10 15:05:32.959646 | orchestrator | + osism status database 2026-01-10 15:05:35.046514 | orchestrator | 2026-01-10 15:05:35 | ERROR  | Unable to get ansible vault 
password 2026-01-10 15:05:35.046615 | orchestrator | 2026-01-10 15:05:35 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-01-10 15:05:35.046629 | orchestrator | 2026-01-10 15:05:35 | ERROR  | Dropping encrypted entries 2026-01-10 15:05:35.079411 | orchestrator | 2026-01-10 15:05:35 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0... 2026-01-10 15:05:35.091163 | orchestrator | 2026-01-10 15:05:35 | INFO  | Cluster Status: Primary 2026-01-10 15:05:35.091234 | orchestrator | 2026-01-10 15:05:35 | INFO  | Connected: ON 2026-01-10 15:05:35.091241 | orchestrator | 2026-01-10 15:05:35 | INFO  | Ready: ON 2026-01-10 15:05:35.091246 | orchestrator | 2026-01-10 15:05:35 | INFO  | Cluster Size: 3 2026-01-10 15:05:35.091296 | orchestrator | 2026-01-10 15:05:35 | INFO  | Local State: Synced 2026-01-10 15:05:35.091301 | orchestrator | 2026-01-10 15:05:35 | INFO  | Cluster State UUID: 62874c89-ee32-11f0-82b4-d38d08a25bcd 2026-01-10 15:05:35.091307 | orchestrator | 2026-01-10 15:05:35 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306 2026-01-10 15:05:35.091319 | orchestrator | 2026-01-10 15:05:35 | INFO  | Galera Version: 26.4.24(ra6b53429) 2026-01-10 15:05:35.091324 | orchestrator | 2026-01-10 15:05:35 | INFO  | Local Node UUID: 980ae71e-ee32-11f0-971e-57e42e6f9347 2026-01-10 15:05:35.091575 | orchestrator | 2026-01-10 15:05:35 | INFO  | Flow Control Paused: 0.04% 2026-01-10 15:05:35.091585 | orchestrator | 2026-01-10 15:05:35 | INFO  | Recv Queue Avg: 0.0075188 2026-01-10 15:05:35.091816 | orchestrator | 2026-01-10 15:05:35 | INFO  | Send Queue Avg: 0.00100781 2026-01-10 15:05:35.091942 | orchestrator | 2026-01-10 15:05:35 | INFO  | Transactions: 5140 local commits, 7871 replicated, 133 received 2026-01-10 15:05:35.092475 | orchestrator | 2026-01-10 15:05:35 | INFO  | Conflicts: 0 cert failures, 0 bf aborts 2026-01-10 15:05:35.092724 | orchestrator | 2026-01-10 
15:05:35 | INFO  | MariaDB Uptime: 23 minutes, 4 seconds 2026-01-10 15:05:35.092760 | orchestrator | 2026-01-10 15:05:35 | INFO  | Threads: 146 connected, 1 running 2026-01-10 15:05:35.092870 | orchestrator | 2026-01-10 15:05:35 | INFO  | Queries: 128344 total, 0 slow 2026-01-10 15:05:35.092882 | orchestrator | 2026-01-10 15:05:35 | INFO  | Aborted Connects: 46 2026-01-10 15:05:35.093009 | orchestrator | 2026-01-10 15:05:35 | INFO  | MariaDB Galera Cluster validation PASSED 2026-01-10 15:05:35.456991 | orchestrator | 2026-01-10 15:05:35.457061 | orchestrator | # Status of Prometheus 2026-01-10 15:05:35.457068 | orchestrator | 2026-01-10 15:05:35.457073 | orchestrator | + echo 2026-01-10 15:05:35.457078 | orchestrator | + echo '# Status of Prometheus' 2026-01-10 15:05:35.457082 | orchestrator | + echo 2026-01-10 15:05:35.457087 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-01-10 15:05:35.519885 | orchestrator | Unauthorized 2026-01-10 15:05:35.523449 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-01-10 15:05:35.582601 | orchestrator | Unauthorized 2026-01-10 15:05:35.586234 | orchestrator | 2026-01-10 15:05:35.586285 | orchestrator | # Status of RabbitMQ 2026-01-10 15:05:35.586291 | orchestrator | 2026-01-10 15:05:35.586295 | orchestrator | + echo 2026-01-10 15:05:35.586300 | orchestrator | + echo '# Status of RabbitMQ' 2026-01-10 15:05:35.586304 | orchestrator | + echo 2026-01-10 15:05:35.587256 | orchestrator | ++ semver latest 10.0.0-0 2026-01-10 15:05:35.643248 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-10 15:05:35.643317 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-10 15:05:35.643343 | orchestrator | + osism status messaging 2026-01-10 15:05:57.214594 | orchestrator | 2026-01-10 15:05:57 | ERROR  | Unable to get ansible vault password 2026-01-10 15:05:57.214666 | orchestrator | 2026-01-10 15:05:57 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: 
'/share/ansible_vault_password.key' 2026-01-10 15:05:57.214674 | orchestrator | 2026-01-10 15:05:57 | ERROR  | Dropping encrypted entries 2026-01-10 15:05:57.251126 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack... 2026-01-10 15:05:57.301743 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-0] RabbitMQ Version: 4.1.7 2026-01-10 15:05:57.301985 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-0] Erlang Version: 27.3.4.1 2026-01-10 15:05:57.302007 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0 2026-01-10 15:05:57.302066 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-0] Cluster Size: 3 2026-01-10 15:05:57.302076 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-10 15:05:57.302084 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-10 15:05:57.302102 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-0] Partitions: None (healthy) 2026-01-10 15:05:57.302523 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-0] Connections: 209, Channels: 208, Queues: 173 2026-01-10 15:05:57.302554 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-0] Messages: 232 total, 232 ready, 0 unacked 2026-01-10 15:05:57.302560 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-0] Message Rates: 12.0/s publish, 12.4/s deliver 2026-01-10 15:05:57.303190 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-0] Disk Free: 58.6 GB (limit: 0.0 GB) 2026-01-10 15:05:57.303249 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-0] Memory Used: 0.14 GB (limit: 18.81 GB) 2026-01-10 15:05:57.303257 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-0] File Descriptors: 
102/1024 2026-01-10 15:05:57.303570 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-0] Sockets: 0/0 2026-01-10 15:05:57.303616 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack... 2026-01-10 15:05:57.353361 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-1] RabbitMQ Version: 4.1.7 2026-01-10 15:05:57.353572 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-1] Erlang Version: 27.3.4.1 2026-01-10 15:05:57.353587 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1 2026-01-10 15:05:57.353592 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-1] Cluster Size: 3 2026-01-10 15:05:57.353597 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-10 15:05:57.354126 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-10 15:05:57.354148 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-1] Partitions: None (healthy) 2026-01-10 15:05:57.354156 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-1] Connections: 209, Channels: 208, Queues: 173 2026-01-10 15:05:57.354180 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-1] Messages: 232 total, 232 ready, 0 unacked 2026-01-10 15:05:57.354207 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-1] Message Rates: 12.0/s publish, 12.4/s deliver 2026-01-10 15:05:57.354402 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-1] Disk Free: 58.9 GB (limit: 0.0 GB) 2026-01-10 15:05:57.354420 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-1] Memory Used: 0.15 GB (limit: 18.81 GB) 2026-01-10 15:05:57.354427 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-1] File Descriptors: 114/1024 2026-01-10 
15:05:57.354884 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-1] Sockets: 0/0 2026-01-10 15:05:57.354897 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack... 2026-01-10 15:05:57.407519 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-2] RabbitMQ Version: 4.1.7 2026-01-10 15:05:57.407670 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-2] Erlang Version: 27.3.4.1 2026-01-10 15:05:57.407682 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2 2026-01-10 15:05:57.407690 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-2] Cluster Size: 3 2026-01-10 15:05:57.407710 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-10 15:05:57.407719 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-10 15:05:57.407727 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-2] Partitions: None (healthy) 2026-01-10 15:05:57.408349 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-2] Connections: 209, Channels: 208, Queues: 173 2026-01-10 15:05:57.408374 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-2] Messages: 232 total, 232 ready, 0 unacked 2026-01-10 15:05:57.408379 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-2] Message Rates: 12.0/s publish, 12.4/s deliver 2026-01-10 15:05:57.408384 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-2] Disk Free: 59.2 GB (limit: 0.0 GB) 2026-01-10 15:05:57.408389 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-2] Memory Used: 0.15 GB (limit: 18.81 GB) 2026-01-10 15:05:57.408394 | orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-2] File Descriptors: 113/1024 2026-01-10 15:05:57.408521 | 
orchestrator | 2026-01-10 15:05:57 | INFO  | [testbed-node-2] Sockets: 0/0 2026-01-10 15:05:57.408530 | orchestrator | 2026-01-10 15:05:57 | INFO  | RabbitMQ Cluster validation PASSED 2026-01-10 15:05:57.723363 | orchestrator | 2026-01-10 15:05:57.723459 | orchestrator | # Status of Redis 2026-01-10 15:05:57.723469 | orchestrator | 2026-01-10 15:05:57.723476 | orchestrator | + echo 2026-01-10 15:05:57.723484 | orchestrator | + echo '# Status of Redis' 2026-01-10 15:05:57.723493 | orchestrator | + echo 2026-01-10 15:05:57.723501 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-01-10 15:05:57.729652 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001589s;;;0.000000;10.000000 2026-01-10 15:05:57.729939 | orchestrator | 2026-01-10 15:05:57.729957 | orchestrator | # Create backup of MariaDB database 2026-01-10 15:05:57.729962 | orchestrator | 2026-01-10 15:05:57.729967 | orchestrator | + popd 2026-01-10 15:05:57.729971 | orchestrator | + echo 2026-01-10 15:05:57.729975 | orchestrator | + echo '# Create backup of MariaDB database' 2026-01-10 15:05:57.729979 | orchestrator | + echo 2026-01-10 15:05:57.729984 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-01-10 15:05:59.849338 | orchestrator | 2026-01-10 15:05:59 | INFO  | Task 1a161f69-33d2-44d4-a835-c5008fbddc76 (mariadb_backup) was prepared for execution. 2026-01-10 15:05:59.849432 | orchestrator | 2026-01-10 15:05:59 | INFO  | It takes a moment until task 1a161f69-33d2-44d4-a835-c5008fbddc76 (mariadb_backup) has been started and output is visible here. 
2026-01-10 15:09:32.009158 | orchestrator | 2026-01-10 15:09:32.009268 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-10 15:09:32.009282 | orchestrator | 2026-01-10 15:09:32.009291 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-10 15:09:32.009300 | orchestrator | Saturday 10 January 2026 15:06:04 +0000 (0:00:00.177) 0:00:00.177 ****** 2026-01-10 15:09:32.009309 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:09:32.009319 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:09:32.009327 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:09:32.009336 | orchestrator | 2026-01-10 15:09:32.009344 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-10 15:09:32.009353 | orchestrator | Saturday 10 January 2026 15:06:04 +0000 (0:00:00.362) 0:00:00.539 ****** 2026-01-10 15:09:32.009361 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-10 15:09:32.009370 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-01-10 15:09:32.009378 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-10 15:09:32.009386 | orchestrator | 2026-01-10 15:09:32.009419 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-10 15:09:32.009430 | orchestrator | 2026-01-10 15:09:32.009438 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-10 15:09:32.009447 | orchestrator | Saturday 10 January 2026 15:06:05 +0000 (0:00:00.590) 0:00:01.130 ****** 2026-01-10 15:09:32.009456 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-10 15:09:32.009464 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-10 15:09:32.009473 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-10 15:09:32.009481 | orchestrator | 
2026-01-10 15:09:32.009489 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-10 15:09:32.009497 | orchestrator | Saturday 10 January 2026 15:06:05 +0000 (0:00:00.462) 0:00:01.592 ****** 2026-01-10 15:09:32.009506 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-10 15:09:32.009515 | orchestrator | 2026-01-10 15:09:32.009524 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-01-10 15:09:32.009532 | orchestrator | Saturday 10 January 2026 15:06:06 +0000 (0:00:00.591) 0:00:02.183 ****** 2026-01-10 15:09:32.009540 | orchestrator | ok: [testbed-node-1] 2026-01-10 15:09:32.009548 | orchestrator | ok: [testbed-node-2] 2026-01-10 15:09:32.009556 | orchestrator | ok: [testbed-node-0] 2026-01-10 15:09:32.009564 | orchestrator | 2026-01-10 15:09:32.009572 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-01-10 15:09:32.009580 | orchestrator | Saturday 10 January 2026 15:06:09 +0000 (0:00:03.661) 0:00:05.845 ****** 2026-01-10 15:09:32.009589 | orchestrator | skipping: [testbed-node-1] 2026-01-10 15:09:32.009599 | orchestrator | skipping: [testbed-node-2] 2026-01-10 15:09:32.009607 | orchestrator | 2026-01-10 15:09:32.009615 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-01-10 15:09:32.009623 | orchestrator | 2026-01-10 15:09:32.009631 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-01-10 15:09:32.009639 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-10 15:09:32.009647 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-01-10 15:09:32.009655 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 
2026-01-10 15:09:32.009663 | orchestrator | mariadb_bootstrap_restart 2026-01-10 15:09:32.009671 | orchestrator | changed: [testbed-node-0] 2026-01-10 15:09:32.009680 | orchestrator | 2026-01-10 15:09:32.009709 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-10 15:09:32.009718 | orchestrator | skipping: no hosts matched 2026-01-10 15:09:32.009727 | orchestrator | 2026-01-10 15:09:32.009735 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-10 15:09:32.009743 | orchestrator | skipping: no hosts matched 2026-01-10 15:09:32.009752 | orchestrator | 2026-01-10 15:09:32.009800 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-10 15:09:32.009809 | orchestrator | skipping: no hosts matched 2026-01-10 15:09:32.009817 | orchestrator | 2026-01-10 15:09:32.009825 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-10 15:09:32.009833 | orchestrator | 2026-01-10 15:09:32.009841 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-10 15:09:32.009850 | orchestrator | Saturday 10 January 2026 15:09:30 +0000 (0:03:20.982) 0:03:26.827 ****** 2026-01-10 15:09:32.009858 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:09:32.009867 | orchestrator | skipping: [testbed-node-1] 2026-01-10 15:09:32.009875 | orchestrator | skipping: [testbed-node-2] 2026-01-10 15:09:32.009883 | orchestrator | 2026-01-10 15:09:32.009891 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-10 15:09:32.009900 | orchestrator | Saturday 10 January 2026 15:09:31 +0000 (0:00:00.339) 0:03:27.167 ****** 2026-01-10 15:09:32.009908 | orchestrator | skipping: [testbed-node-0] 2026-01-10 15:09:32.009917 | orchestrator | skipping: [testbed-node-1] 2026-01-10 15:09:32.009924 | orchestrator | 
skipping: [testbed-node-2] 2026-01-10 15:09:32.009933 | orchestrator | 2026-01-10 15:09:32.009942 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 15:09:32.009951 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-10 15:09:32.009961 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-10 15:09:32.009969 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-10 15:09:32.009977 | orchestrator | 2026-01-10 15:09:32.009985 | orchestrator | 2026-01-10 15:09:32.010010 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 15:09:32.010070 | orchestrator | Saturday 10 January 2026 15:09:31 +0000 (0:00:00.427) 0:03:27.594 ****** 2026-01-10 15:09:32.010094 | orchestrator | =============================================================================== 2026-01-10 15:09:32.010102 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 200.98s 2026-01-10 15:09:32.010136 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.66s 2026-01-10 15:09:32.010144 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.59s 2026-01-10 15:09:32.010152 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s 2026-01-10 15:09:32.010160 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.46s 2026-01-10 15:09:32.010168 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.43s 2026-01-10 15:09:32.010176 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2026-01-10 15:09:32.010184 | orchestrator | Include mariadb post-deploy.yml 
----------------------------------------- 0.34s 2026-01-10 15:09:32.373176 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-01-10 15:09:32.381189 | orchestrator | + set -e 2026-01-10 15:09:32.381457 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-10 15:09:32.381473 | orchestrator | ++ export INTERACTIVE=false 2026-01-10 15:09:32.381480 | orchestrator | ++ INTERACTIVE=false 2026-01-10 15:09:32.381535 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-10 15:09:32.381549 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-10 15:09:32.381938 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-01-10 15:09:32.383363 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-01-10 15:09:32.388009 | orchestrator | 2026-01-10 15:09:32.388067 | orchestrator | # OpenStack endpoints 2026-01-10 15:09:32.388073 | orchestrator | 2026-01-10 15:09:32.388077 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-10 15:09:32.388082 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-10 15:09:32.388086 | orchestrator | + export OS_CLOUD=admin 2026-01-10 15:09:32.388091 | orchestrator | + OS_CLOUD=admin 2026-01-10 15:09:32.388095 | orchestrator | + echo 2026-01-10 15:09:32.388099 | orchestrator | + echo '# OpenStack endpoints' 2026-01-10 15:09:32.388103 | orchestrator | + echo 2026-01-10 15:09:32.388107 | orchestrator | + openstack endpoint list 2026-01-10 15:09:35.563827 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-10 15:09:35.563918 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-01-10 15:09:35.563929 | orchestrator | 
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-10 15:09:35.563934 | orchestrator | | 03806d3e6d38458483197454c86b7cb9 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-01-10 15:09:35.563938 | orchestrator | | 217434bb5592430ca9897b7464be8e24 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-01-10 15:09:35.563943 | orchestrator | | 316d11172c234e0ca268e89982e7d35f | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-01-10 15:09:35.563947 | orchestrator | | 33ca5b91838241f0b90fdcd8c6f020de | RegionOne | cinder | block-storage | True | public | https://api.testbed.osism.xyz:8776/v3 | 2026-01-10 15:09:35.563950 | orchestrator | | 3754b92a9e6044b0b9f6d1477def8f16 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-01-10 15:09:35.563954 | orchestrator | | 417b1a2346f64d47a477973ff8fd8439 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-01-10 15:09:35.563958 | orchestrator | | 48d2c39897f9462ab29cf424bac25f68 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-01-10 15:09:35.563961 | orchestrator | | 4be07d2f9ac341c5b1fb27dddf00a1bc | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-01-10 15:09:35.563965 | orchestrator | | 54ebb95f8eb84f9abac93997484438a4 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-01-10 15:09:35.563969 | orchestrator | | 779435221b684449b38e0d5643916b4c | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-01-10 15:09:35.563972 | orchestrator | | 8a0d844889cc448d849da94de179f8ac | 
RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-01-10 15:09:35.563976 | orchestrator | | 95297d3b6a90407282ed4cca4712fb99 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-01-10 15:09:35.563980 | orchestrator | | a45cc2800c0f4c86a06e135e8120b5d3 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-01-10 15:09:35.563983 | orchestrator | | a72d201277724010bdce72c1ffd8b893 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-01-10 15:09:35.564034 | orchestrator | | b2fdf8a577824589982a0d782afe2b35 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-01-10 15:09:35.564039 | orchestrator | | b8f29dd9014f4abfb0f00708eac159a4 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-01-10 15:09:35.564042 | orchestrator | | bd9b8fccacb7487db26c447d1ae8fbfd | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-01-10 15:09:35.564046 | orchestrator | | c3bd0257897948fd839e6397c7e41d6c | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-01-10 15:09:35.564050 | orchestrator | | c7c8680e8650485dbdc967abff8e1002 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-01-10 15:09:35.564066 | orchestrator | | db0e264c240a46e9b200f07a8f6a6aae | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-01-10 15:09:35.564082 | orchestrator | | e1264eb8b33b4c6cb224cf690d2f4a0e | RegionOne | cinder | block-storage | True | internal | https://api-int.testbed.osism.xyz:8776/v3 | 2026-01-10 15:09:35.564086 | orchestrator | | e7166ef8be0d47aaa8f544ac62f9171c | RegionOne | barbican | 
key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-01-10 15:09:35.564090 | orchestrator | | f16b67f802c546ddb469a3e66b041cab | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-01-10 15:09:35.564095 | orchestrator | | fdfcd7b23a824dc7b6da83c132a272de | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-01-10 15:09:35.564098 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-10 15:09:35.822192 | orchestrator | 2026-01-10 15:09:35.822281 | orchestrator | # Cinder 2026-01-10 15:09:35.822292 | orchestrator | 2026-01-10 15:09:35.822298 | orchestrator | + echo 2026-01-10 15:09:35.822305 | orchestrator | + echo '# Cinder' 2026-01-10 15:09:35.822322 | orchestrator | + echo 2026-01-10 15:09:35.822329 | orchestrator | + openstack volume service list 2026-01-10 15:09:39.558916 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-10 15:09:39.559030 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-01-10 15:09:39.559039 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-10 15:09:39.559045 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-01-10T15:09:33.000000 | 2026-01-10 15:09:39.559050 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-01-10T15:09:34.000000 | 2026-01-10 15:09:39.559056 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-01-10T15:09:33.000000 | 2026-01-10 15:09:39.559061 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-01-10T15:09:34.000000 | 2026-01-10 15:09:39.559066 | 
orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-01-10T15:09:38.000000 | 2026-01-10 15:09:39.559071 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-01-10T15:09:38.000000 | 2026-01-10 15:09:39.559076 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-01-10T15:09:37.000000 | 2026-01-10 15:09:39.559081 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-01-10T15:09:30.000000 | 2026-01-10 15:09:39.559086 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-01-10T15:09:30.000000 | 2026-01-10 15:09:39.559111 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-10 15:09:39.824965 | orchestrator | 2026-01-10 15:09:39.825089 | orchestrator | # Neutron 2026-01-10 15:09:39.825100 | orchestrator | 2026-01-10 15:09:39.825106 | orchestrator | + echo 2026-01-10 15:09:39.825114 | orchestrator | + echo '# Neutron' 2026-01-10 15:09:39.825121 | orchestrator | + echo 2026-01-10 15:09:39.825127 | orchestrator | + openstack network agent list 2026-01-10 15:09:42.584537 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-10 15:09:42.584622 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-01-10 15:09:42.584629 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-10 15:09:42.584634 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-01-10 15:09:42.584638 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-01-10 15:09:42.584642 | 
orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-01-10 15:09:42.584645 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-01-10 15:09:42.584649 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-01-10 15:09:42.584667 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-01-10 15:09:42.584671 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-01-10 15:09:42.584674 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-01-10 15:09:42.584678 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-01-10 15:09:42.584682 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-10 15:09:42.885400 | orchestrator | + openstack network service provider list 2026-01-10 15:09:45.449711 | orchestrator | +---------------+------+---------+ 2026-01-10 15:09:45.449789 | orchestrator | | Service Type | Name | Default | 2026-01-10 15:09:45.449795 | orchestrator | +---------------+------+---------+ 2026-01-10 15:09:45.449799 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-01-10 15:09:45.449804 | orchestrator | +---------------+------+---------+ 2026-01-10 15:09:45.726230 | orchestrator | 2026-01-10 15:09:45.726318 | orchestrator | # Nova 2026-01-10 15:09:45.726362 | orchestrator | 2026-01-10 15:09:45.726370 | orchestrator | + echo 2026-01-10 15:09:45.726377 | orchestrator | + echo '# Nova' 2026-01-10 15:09:45.726384 | orchestrator | + echo 
2026-01-10 15:09:45.726390 | orchestrator | + openstack compute service list 2026-01-10 15:09:48.993380 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-01-10 15:09:48.993471 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-01-10 15:09:48.993479 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-01-10 15:09:48.993483 | orchestrator | | 052cb4df-81f4-4f34-8d32-5617a1bbb413 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-01-10T15:09:45.000000 | 2026-01-10 15:09:48.993507 | orchestrator | | e3d0fb81-cec5-49b6-9819-d16f18d8669f | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-01-10T15:09:47.000000 | 2026-01-10 15:09:48.993512 | orchestrator | | 77978bc2-6a79-4b70-a9db-258c496697c3 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-01-10T15:09:45.000000 | 2026-01-10 15:09:48.993519 | orchestrator | | f2681cbc-a077-4a29-b213-05c044abf8c2 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-01-10T15:09:41.000000 | 2026-01-10 15:09:48.993524 | orchestrator | | 47c8824f-0dc6-4150-a2fc-751a3ef0b61e | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-01-10T15:09:42.000000 | 2026-01-10 15:09:48.993530 | orchestrator | | eb459377-ea70-42e5-9586-3e658918cef7 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-01-10T15:09:43.000000 | 2026-01-10 15:09:48.993536 | orchestrator | | e21b7b70-fe8f-45b2-807a-78c6b1aa9553 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-01-10T15:09:47.000000 | 2026-01-10 15:09:48.993542 | orchestrator | | 2164687e-bb56-4736-bfc3-f977bae8a329 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-01-10T15:09:47.000000 | 2026-01-10 15:09:48.993548 | orchestrator | | ee5b707a-a906-4d6e-bb51-531e815f2226 | 
nova-compute | testbed-node-5 | nova | enabled | up | 2026-01-10T15:09:47.000000 | 2026-01-10 15:09:48.993554 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-01-10 15:09:49.301298 | orchestrator | + openstack hypervisor list 2026-01-10 15:09:52.378238 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-01-10 15:09:52.378327 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-01-10 15:09:52.378333 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-01-10 15:09:52.378339 | orchestrator | | 34189d6e-4ef1-468b-bd91-926f8956a63e | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-01-10 15:09:52.378346 | orchestrator | | 28cf231d-799a-48c4-8b4b-6209e063288d | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-01-10 15:09:52.378350 | orchestrator | | b12f7dbc-63e9-43fe-80af-8ed243eb759d | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-01-10 15:09:52.378355 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-01-10 15:09:52.656441 | orchestrator | 2026-01-10 15:09:52.656541 | orchestrator | # Run OpenStack test play 2026-01-10 15:09:52.656554 | orchestrator | 2026-01-10 15:09:52.656560 | orchestrator | + echo 2026-01-10 15:09:52.656569 | orchestrator | + echo '# Run OpenStack test play' 2026-01-10 15:09:52.656578 | orchestrator | + echo 2026-01-10 15:09:52.656585 | orchestrator | + osism apply --environment openstack test 2026-01-10 15:09:54.675382 | orchestrator | 2026-01-10 15:09:54 | INFO  | Trying to run play test in environment openstack 2026-01-10 15:10:04.789350 | orchestrator | 2026-01-10 15:10:04 | INFO  | Task b4963bad-befb-4bae-b0e1-d1df3e5de07c (test) was prepared for execution. 
2026-01-10 15:10:04.789443 | orchestrator | 2026-01-10 15:10:04 | INFO  | It takes a moment until task b4963bad-befb-4bae-b0e1-d1df3e5de07c (test) has been started and output is visible here. 2026-01-10 15:17:14.343837 | orchestrator | 2026-01-10 15:17:14.343962 | orchestrator | PLAY [Create test project] ***************************************************** 2026-01-10 15:17:14.343978 | orchestrator | 2026-01-10 15:17:14.343987 | orchestrator | TASK [Create test domain] ****************************************************** 2026-01-10 15:17:14.343995 | orchestrator | Saturday 10 January 2026 15:10:09 +0000 (0:00:00.071) 0:00:00.071 ****** 2026-01-10 15:17:14.344003 | orchestrator | changed: [localhost] 2026-01-10 15:17:14.344012 | orchestrator | 2026-01-10 15:17:14.344019 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-01-10 15:17:14.344026 | orchestrator | Saturday 10 January 2026 15:10:12 +0000 (0:00:03.201) 0:00:03.272 ****** 2026-01-10 15:17:14.344034 | orchestrator | changed: [localhost] 2026-01-10 15:17:14.344062 | orchestrator | 2026-01-10 15:17:14.344070 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-01-10 15:17:14.344081 | orchestrator | Saturday 10 January 2026 15:10:16 +0000 (0:00:04.135) 0:00:07.407 ****** 2026-01-10 15:17:14.344093 | orchestrator | changed: [localhost] 2026-01-10 15:17:14.344189 | orchestrator | 2026-01-10 15:17:14.344203 | orchestrator | TASK [Create test project] ***************************************************** 2026-01-10 15:17:14.344215 | orchestrator | Saturday 10 January 2026 15:10:22 +0000 (0:00:06.527) 0:00:13.935 ****** 2026-01-10 15:17:14.344227 | orchestrator | changed: [localhost] 2026-01-10 15:17:14.344240 | orchestrator | 2026-01-10 15:17:14.344253 | orchestrator | TASK [Create test user] ******************************************************** 2026-01-10 15:17:14.344264 | orchestrator | Saturday 10 January 
2026 15:10:26 +0000 (0:00:04.083) 0:00:18.019 ****** 2026-01-10 15:17:14.344277 | orchestrator | changed: [localhost] 2026-01-10 15:17:14.344285 | orchestrator | 2026-01-10 15:17:14.344292 | orchestrator | TASK [Add member roles to user test] ******************************************* 2026-01-10 15:17:14.344299 | orchestrator | Saturday 10 January 2026 15:10:31 +0000 (0:00:04.265) 0:00:22.284 ****** 2026-01-10 15:17:14.344307 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-01-10 15:17:14.344314 | orchestrator | changed: [localhost] => (item=member) 2026-01-10 15:17:14.344323 | orchestrator | changed: [localhost] => (item=creator) 2026-01-10 15:17:14.344330 | orchestrator | 2026-01-10 15:17:14.344337 | orchestrator | TASK [Create test server group] ************************************************ 2026-01-10 15:17:14.344344 | orchestrator | Saturday 10 January 2026 15:10:43 +0000 (0:00:11.994) 0:00:34.278 ****** 2026-01-10 15:17:14.344351 | orchestrator | changed: [localhost] 2026-01-10 15:17:14.344359 | orchestrator | 2026-01-10 15:17:14.344367 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-01-10 15:17:14.344375 | orchestrator | Saturday 10 January 2026 15:10:47 +0000 (0:00:04.260) 0:00:38.539 ****** 2026-01-10 15:17:14.344383 | orchestrator | changed: [localhost] 2026-01-10 15:17:14.344391 | orchestrator | 2026-01-10 15:17:14.344399 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-01-10 15:17:14.344407 | orchestrator | Saturday 10 January 2026 15:10:52 +0000 (0:00:04.714) 0:00:43.253 ****** 2026-01-10 15:17:14.344415 | orchestrator | changed: [localhost] 2026-01-10 15:17:14.344423 | orchestrator | 2026-01-10 15:17:14.344430 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-01-10 15:17:14.344438 | orchestrator | Saturday 10 January 2026 15:10:56 +0000 (0:00:04.167) 0:00:47.420 
****** 2026-01-10 15:17:14.344447 | orchestrator | changed: [localhost] 2026-01-10 15:17:14.344456 | orchestrator | 2026-01-10 15:17:14.344468 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2026-01-10 15:17:14.344483 | orchestrator | Saturday 10 January 2026 15:11:00 +0000 (0:00:04.083) 0:00:51.504 ****** 2026-01-10 15:17:14.344499 | orchestrator | changed: [localhost] 2026-01-10 15:17:14.344514 | orchestrator | 2026-01-10 15:17:14.344525 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-01-10 15:17:14.344536 | orchestrator | Saturday 10 January 2026 15:11:04 +0000 (0:00:04.170) 0:00:55.675 ****** 2026-01-10 15:17:14.344549 | orchestrator | changed: [localhost] 2026-01-10 15:17:14.344562 | orchestrator | 2026-01-10 15:17:14.344573 | orchestrator | TASK [Create test network] ***************************************************** 2026-01-10 15:17:14.344584 | orchestrator | Saturday 10 January 2026 15:11:08 +0000 (0:00:03.876) 0:00:59.551 ****** 2026-01-10 15:17:14.344595 | orchestrator | changed: [localhost] 2026-01-10 15:17:14.344606 | orchestrator | 2026-01-10 15:17:14.344615 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-01-10 15:17:14.344627 | orchestrator | Saturday 10 January 2026 15:11:13 +0000 (0:00:04.756) 0:01:04.308 ****** 2026-01-10 15:17:14.344638 | orchestrator | changed: [localhost] 2026-01-10 15:17:14.344649 | orchestrator | 2026-01-10 15:17:14.344659 | orchestrator | TASK [Create test router] ****************************************************** 2026-01-10 15:17:14.344670 | orchestrator | Saturday 10 January 2026 15:11:18 +0000 (0:00:05.331) 0:01:09.639 ****** 2026-01-10 15:17:14.344694 | orchestrator | changed: [localhost] 2026-01-10 15:17:14.344705 | orchestrator | 2026-01-10 15:17:14.344716 | orchestrator | TASK [Create test instances] *************************************************** 
2026-01-10 15:17:14.344727 | orchestrator | Saturday 10 January 2026 15:11:29 +0000 (0:00:10.967) 0:01:20.607 ****** 2026-01-10 15:17:14.344737 | orchestrator | changed: [localhost] => (item=test) 2026-01-10 15:17:14.344749 | orchestrator | changed: [localhost] => (item=test-1) 2026-01-10 15:17:14.344761 | orchestrator | 2026-01-10 15:17:14.344770 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2026-01-10 15:17:14.344778 | orchestrator | 2026-01-10 15:17:14.344785 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2026-01-10 15:17:14.344792 | orchestrator | changed: [localhost] => (item=test-2) 2026-01-10 15:17:14.344799 | orchestrator | 2026-01-10 15:17:14.344806 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2026-01-10 15:17:14.344813 | orchestrator | changed: [localhost] => (item=test-3) 2026-01-10 15:17:14.344820 | orchestrator | 2026-01-10 15:17:14.344827 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2026-01-10 15:17:14.344835 | orchestrator | 2026-01-10 15:17:14.344847 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2026-01-10 15:17:14.344859 | orchestrator | changed: [localhost] => (item=test-4) 2026-01-10 15:17:14.344870 | orchestrator | 2026-01-10 15:17:14.344882 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-01-10 15:17:14.344924 | orchestrator | Saturday 10 January 2026 15:15:48 +0000 (0:04:18.816) 0:05:39.424 ****** 2026-01-10 15:17:14.344939 | orchestrator | changed: [localhost] => (item=test) 2026-01-10 15:17:14.344951 | orchestrator | changed: [localhost] => (item=test-1) 2026-01-10 15:17:14.344962 | orchestrator | changed: [localhost] => (item=test-2) 2026-01-10 15:17:14.344974 | orchestrator | changed: [localhost] => (item=test-3) 2026-01-10 
15:17:14.344985 | orchestrator | changed: [localhost] => (item=test-4) 2026-01-10 15:17:14.344992 | orchestrator | 2026-01-10 15:17:14.344999 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-01-10 15:17:14.345006 | orchestrator | Saturday 10 January 2026 15:16:12 +0000 (0:00:24.055) 0:06:03.479 ****** 2026-01-10 15:17:14.345013 | orchestrator | changed: [localhost] => (item=test) 2026-01-10 15:17:14.345020 | orchestrator | changed: [localhost] => (item=test-1) 2026-01-10 15:17:14.345027 | orchestrator | changed: [localhost] => (item=test-2) 2026-01-10 15:17:14.345034 | orchestrator | changed: [localhost] => (item=test-3) 2026-01-10 15:17:14.345041 | orchestrator | changed: [localhost] => (item=test-4) 2026-01-10 15:17:14.345049 | orchestrator | 2026-01-10 15:17:14.345056 | orchestrator | TASK [Create test volume] ****************************************************** 2026-01-10 15:17:14.345063 | orchestrator | Saturday 10 January 2026 15:16:47 +0000 (0:00:35.282) 0:06:38.761 ****** 2026-01-10 15:17:14.345070 | orchestrator | changed: [localhost] 2026-01-10 15:17:14.345077 | orchestrator | 2026-01-10 15:17:14.345084 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-01-10 15:17:14.345091 | orchestrator | Saturday 10 January 2026 15:16:54 +0000 (0:00:07.041) 0:06:45.803 ****** 2026-01-10 15:17:14.345160 | orchestrator | changed: [localhost] 2026-01-10 15:17:14.345172 | orchestrator | 2026-01-10 15:17:14.345183 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-01-10 15:17:14.345196 | orchestrator | Saturday 10 January 2026 15:17:08 +0000 (0:00:13.883) 0:06:59.686 ****** 2026-01-10 15:17:14.345208 | orchestrator | ok: [localhost] 2026-01-10 15:17:14.345221 | orchestrator | 2026-01-10 15:17:14.345230 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-01-10 
15:17:14.345237 | orchestrator | Saturday 10 January 2026 15:17:13 +0000 (0:00:05.323) 0:07:05.010 ****** 2026-01-10 15:17:14.345243 | orchestrator | ok: [localhost] => { 2026-01-10 15:17:14.345251 | orchestrator |  "msg": "192.168.112.159" 2026-01-10 15:17:14.345258 | orchestrator | } 2026-01-10 15:17:14.345273 | orchestrator | 2026-01-10 15:17:14.345280 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-10 15:17:14.345288 | orchestrator | localhost : ok=22  changed=20  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-10 15:17:14.345296 | orchestrator | 2026-01-10 15:17:14.345303 | orchestrator | 2026-01-10 15:17:14.345310 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-10 15:17:14.345317 | orchestrator | Saturday 10 January 2026 15:17:14 +0000 (0:00:00.041) 0:07:05.051 ****** 2026-01-10 15:17:14.345324 | orchestrator | =============================================================================== 2026-01-10 15:17:14.345331 | orchestrator | Create test instances ------------------------------------------------- 258.82s 2026-01-10 15:17:14.345338 | orchestrator | Add tag to instances --------------------------------------------------- 35.28s 2026-01-10 15:17:14.345345 | orchestrator | Add metadata to instances ---------------------------------------------- 24.06s 2026-01-10 15:17:14.345352 | orchestrator | Attach test volume ----------------------------------------------------- 13.88s 2026-01-10 15:17:14.345359 | orchestrator | Add member roles to user test ------------------------------------------ 11.99s 2026-01-10 15:17:14.345366 | orchestrator | Create test router ----------------------------------------------------- 10.97s 2026-01-10 15:17:14.345373 | orchestrator | Create test volume ------------------------------------------------------ 7.04s 2026-01-10 15:17:14.345380 | orchestrator | Add manager role to user test-admin 
------------------------------------- 6.53s 2026-01-10 15:17:14.345387 | orchestrator | Create test subnet ------------------------------------------------------ 5.33s 2026-01-10 15:17:14.345394 | orchestrator | Create floating ip address ---------------------------------------------- 5.32s 2026-01-10 15:17:14.345401 | orchestrator | Create test network ----------------------------------------------------- 4.76s 2026-01-10 15:17:14.345409 | orchestrator | Create ssh security group ----------------------------------------------- 4.71s 2026-01-10 15:17:14.345416 | orchestrator | Create test user -------------------------------------------------------- 4.27s 2026-01-10 15:17:14.345423 | orchestrator | Create test server group ------------------------------------------------ 4.26s 2026-01-10 15:17:14.345430 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.17s 2026-01-10 15:17:14.345437 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.17s 2026-01-10 15:17:14.345444 | orchestrator | Create test-admin user -------------------------------------------------- 4.14s 2026-01-10 15:17:14.345451 | orchestrator | Create icmp security group ---------------------------------------------- 4.08s 2026-01-10 15:17:14.345458 | orchestrator | Create test project ----------------------------------------------------- 4.08s 2026-01-10 15:17:14.345465 | orchestrator | Create test keypair ----------------------------------------------------- 3.88s 2026-01-10 15:17:14.672384 | orchestrator | + server_list 2026-01-10 15:17:14.672467 | orchestrator | + openstack --os-cloud test server list 2026-01-10 15:17:18.518721 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-01-10 15:17:18.518825 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-01-10 15:17:18.518839 | orchestrator | 
+--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-01-10 15:17:18.518849 | orchestrator | | 339c60a1-f1de-4477-8671-1cf7187b8137 | test-4 | ACTIVE | test=192.168.112.127, 192.168.200.128 | N/A (booted from volume) | SCS-1L-1 | 2026-01-10 15:17:18.518859 | orchestrator | | 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 | test-3 | ACTIVE | test=192.168.112.160, 192.168.200.83 | N/A (booted from volume) | SCS-1L-1 | 2026-01-10 15:17:18.518870 | orchestrator | | 5f72ba77-bd85-468f-8626-74fb2642ae0d | test-2 | ACTIVE | test=192.168.112.149, 192.168.200.144 | N/A (booted from volume) | SCS-1L-1 | 2026-01-10 15:17:18.518880 | orchestrator | | 27de98fd-6f6f-4a22-ad37-6d0985f6951a | test-1 | ACTIVE | test=192.168.112.110, 192.168.200.177 | N/A (booted from volume) | SCS-1L-1 | 2026-01-10 15:17:18.518921 | orchestrator | | b093228a-4314-4e18-871f-9ea35a18b83f | test | ACTIVE | test=192.168.112.159, 192.168.200.158 | N/A (booted from volume) | SCS-1L-1 | 2026-01-10 15:17:18.518933 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-01-10 15:17:18.771010 | orchestrator | + openstack --os-cloud test server show test 2026-01-10 15:17:21.979295 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:17:21.979387 | orchestrator | | Field | Value | 2026-01-10 15:17:21.979396 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:17:21.979403 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-10 15:17:21.979410 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-10 15:17:21.979423 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-10 15:17:21.979430 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-01-10 15:17:21.979439 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-10 15:17:21.979445 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-10 15:17:21.979480 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-10 15:17:21.979487 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-10 15:17:21.979497 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-10 15:17:21.979509 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-10 15:17:21.979524 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-10 15:17:21.979536 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-10 15:17:21.979549 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-01-10 15:17:21.979560 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-10 15:17:21.979576 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-10 15:17:21.979596 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-10T15:12:14.000000 | 2026-01-10 15:17:21.979613 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-10 15:17:21.979623 | orchestrator | | accessIPv4 | | 2026-01-10 15:17:21.979634 | orchestrator | | accessIPv6 | | 2026-01-10 15:17:21.979645 | orchestrator | | addresses 
| test=192.168.112.159, 192.168.200.158 | 2026-01-10 15:17:21.979657 | orchestrator | | config_drive | | 2026-01-10 15:17:21.979669 | orchestrator | | created | 2026-01-10T15:11:37Z | 2026-01-10 15:17:21.979682 | orchestrator | | description | None | 2026-01-10 15:17:21.979694 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-10 15:17:21.979718 | orchestrator | | hostId | f0282884ea9da1233ef7286f3f4790cd0c3bdc47e51cd3be8f51d81c | 2026-01-10 15:17:21.979730 | orchestrator | | host_status | None | 2026-01-10 15:17:21.979743 | orchestrator | | id | b093228a-4314-4e18-871f-9ea35a18b83f | 2026-01-10 15:17:21.979751 | orchestrator | | image | N/A (booted from volume) | 2026-01-10 15:17:21.979758 | orchestrator | | key_name | test | 2026-01-10 15:17:21.979766 | orchestrator | | locked | False | 2026-01-10 15:17:21.979773 | orchestrator | | locked_reason | None | 2026-01-10 15:17:21.979780 | orchestrator | | name | test | 2026-01-10 15:17:21.979787 | orchestrator | | pinned_availability_zone | None | 2026-01-10 15:17:21.979799 | orchestrator | | progress | 0 | 2026-01-10 15:17:21.979810 | orchestrator | | project_id | 61ba3875ae4b49cd8c99d0f35c288dbd | 2026-01-10 15:17:21.979816 | orchestrator | | properties | hostname='test' | 2026-01-10 15:17:21.979848 | orchestrator | | security_groups | name='ssh' | 2026-01-10 15:17:21.979855 | orchestrator | | | name='icmp' | 2026-01-10 15:17:21.979861 | orchestrator | | server_groups | None | 2026-01-10 15:17:21.979867 | orchestrator | | status | ACTIVE | 2026-01-10 15:17:21.979874 | orchestrator | | tags | test | 2026-01-10 15:17:21.979880 | orchestrator | | 
trusted_image_certificates | None | 2026-01-10 15:17:21.979886 | orchestrator | | updated | 2026-01-10T15:15:53Z | 2026-01-10 15:17:21.979900 | orchestrator | | user_id | 41817e4167584625b138399408d6dcc7 | 2026-01-10 15:17:21.979910 | orchestrator | | volumes_attached | delete_on_termination='True', id='e05481cd-28f3-46b4-a871-c47d48a4fa9c' | 2026-01-10 15:17:21.979916 | orchestrator | | | delete_on_termination='False', id='00af1e8b-ba04-4518-b28a-1e5afdeded03' | 2026-01-10 15:17:21.982453 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:17:22.265388 | orchestrator | + openstack --os-cloud test server show test-1 2026-01-10 15:17:25.267131 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:17:25.267213 | orchestrator | | Field | Value | 2026-01-10 15:17:25.267223 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 
2026-01-10 15:17:25.267230 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-10 15:17:25.267236 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-10 15:17:25.267258 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-10 15:17:25.267264 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-01-10 15:17:25.267279 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-10 15:17:25.267286 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-10 15:17:25.267305 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-10 15:17:25.267311 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-10 15:17:25.267317 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-10 15:17:25.267324 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-10 15:17:25.267330 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-10 15:17:25.267344 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-10 15:17:25.267350 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-01-10 15:17:25.267356 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-10 15:17:25.267365 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-10 15:17:25.267371 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-10T15:13:12.000000 | 2026-01-10 15:17:25.267387 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-10 15:17:25.267396 | orchestrator | | accessIPv4 | | 2026-01-10 15:17:25.267431 | orchestrator | | accessIPv6 | | 2026-01-10 15:17:25.267442 | orchestrator | | addresses | test=192.168.112.110, 192.168.200.177 | 2026-01-10 15:17:25.267459 | orchestrator | | config_drive | | 2026-01-10 15:17:25.267469 | orchestrator | | created | 2026-01-10T15:12:36Z | 2026-01-10 15:17:25.267479 | orchestrator | | description | None | 2026-01-10 15:17:25.267489 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', 
extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-10 15:17:25.267503 | orchestrator | | hostId | 32816a133403a15cbbbcdbc0ce7cc3d20323ce9d7102503932e9e322 | 2026-01-10 15:17:25.267513 | orchestrator | | host_status | None | 2026-01-10 15:17:25.267531 | orchestrator | | id | 27de98fd-6f6f-4a22-ad37-6d0985f6951a | 2026-01-10 15:17:25.267537 | orchestrator | | image | N/A (booted from volume) | 2026-01-10 15:17:25.267543 | orchestrator | | key_name | test | 2026-01-10 15:17:25.267553 | orchestrator | | locked | False | 2026-01-10 15:17:25.267559 | orchestrator | | locked_reason | None | 2026-01-10 15:17:25.267564 | orchestrator | | name | test-1 | 2026-01-10 15:17:25.267570 | orchestrator | | pinned_availability_zone | None | 2026-01-10 15:17:25.267576 | orchestrator | | progress | 0 | 2026-01-10 15:17:25.267585 | orchestrator | | project_id | 61ba3875ae4b49cd8c99d0f35c288dbd | 2026-01-10 15:17:25.267591 | orchestrator | | properties | hostname='test-1' | 2026-01-10 15:17:25.267603 | orchestrator | | security_groups | name='ssh' | 2026-01-10 15:17:25.267609 | orchestrator | | | name='icmp' | 2026-01-10 15:17:25.267614 | orchestrator | | server_groups | None | 2026-01-10 15:17:25.267624 | orchestrator | | status | ACTIVE | 2026-01-10 15:17:25.267632 | orchestrator | | tags | test | 2026-01-10 15:17:25.267638 | orchestrator | | trusted_image_certificates | None | 2026-01-10 15:17:25.267646 | orchestrator | | updated | 2026-01-10T15:15:58Z | 2026-01-10 15:17:25.267659 | orchestrator | | user_id | 41817e4167584625b138399408d6dcc7 | 2026-01-10 15:17:25.267669 | orchestrator | | volumes_attached | delete_on_termination='True', id='4b978699-bae6-45b9-bc40-a6eafb9b0d26' | 2026-01-10 15:17:25.271967 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:17:25.582341 | orchestrator | + openstack --os-cloud test server show test-2 2026-01-10 15:17:28.653591 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:17:28.653688 | orchestrator | | Field | Value | 2026-01-10 15:17:28.653729 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:17:28.653741 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-10 15:17:28.653751 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-10 15:17:28.653760 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-10 15:17:28.653770 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-01-10 15:17:28.653780 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-10 15:17:28.653790 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-10 
15:17:28.653816 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-10 15:17:28.653827 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-10 15:17:28.653844 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-10 15:17:28.653854 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-10 15:17:28.653864 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-10 15:17:28.653874 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-10 15:17:28.653884 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-01-10 15:17:28.653894 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-10 15:17:28.653925 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-10 15:17:28.653936 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-10T15:14:09.000000 | 2026-01-10 15:17:28.653952 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-10 15:17:28.653968 | orchestrator | | accessIPv4 | | 2026-01-10 15:17:28.653978 | orchestrator | | accessIPv6 | | 2026-01-10 15:17:28.653987 | orchestrator | | addresses | test=192.168.112.149, 192.168.200.144 | 2026-01-10 15:17:28.653997 | orchestrator | | config_drive | | 2026-01-10 15:17:28.654007 | orchestrator | | created | 2026-01-10T15:13:30Z | 2026-01-10 15:17:28.654123 | orchestrator | | description | None | 2026-01-10 15:17:28.654140 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-10 15:17:28.654155 | orchestrator | | hostId | cd024cd764565190760e313246d20fde5394e4fa7977d34918768c62 | 2026-01-10 15:17:28.654166 | orchestrator | | host_status | None | 2026-01-10 15:17:28.654192 | orchestrator | | id | 
5f72ba77-bd85-468f-8626-74fb2642ae0d | 2026-01-10 15:17:28.654203 | orchestrator | | image | N/A (booted from volume) | 2026-01-10 15:17:28.654213 | orchestrator | | key_name | test | 2026-01-10 15:17:28.654223 | orchestrator | | locked | False | 2026-01-10 15:17:28.654233 | orchestrator | | locked_reason | None | 2026-01-10 15:17:28.654242 | orchestrator | | name | test-2 | 2026-01-10 15:17:28.654252 | orchestrator | | pinned_availability_zone | None | 2026-01-10 15:17:28.654262 | orchestrator | | progress | 0 | 2026-01-10 15:17:28.654276 | orchestrator | | project_id | 61ba3875ae4b49cd8c99d0f35c288dbd | 2026-01-10 15:17:28.654287 | orchestrator | | properties | hostname='test-2' | 2026-01-10 15:17:28.654309 | orchestrator | | security_groups | name='ssh' | 2026-01-10 15:17:28.654319 | orchestrator | | | name='icmp' | 2026-01-10 15:17:28.654331 | orchestrator | | server_groups | None | 2026-01-10 15:17:28.654349 | orchestrator | | status | ACTIVE | 2026-01-10 15:17:28.654365 | orchestrator | | tags | test | 2026-01-10 15:17:28.654382 | orchestrator | | trusted_image_certificates | None | 2026-01-10 15:17:28.654398 | orchestrator | | updated | 2026-01-10T15:16:02Z | 2026-01-10 15:17:28.654415 | orchestrator | | user_id | 41817e4167584625b138399408d6dcc7 | 2026-01-10 15:17:28.654437 | orchestrator | | volumes_attached | delete_on_termination='True', id='22dc42be-3b10-4f56-a4e0-4ebeebb945ca' | 2026-01-10 15:17:28.658798 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:17:28.949058 | orchestrator | + openstack --os-cloud test server show test-3 2026-01-10 15:17:31.913886 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:17:31.913964 | orchestrator | | Field | Value | 2026-01-10 15:17:31.913972 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:17:31.913978 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-10 15:17:31.913983 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-10 15:17:31.913988 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-10 15:17:31.913994 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-01-10 15:17:31.913999 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-10 15:17:31.914065 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-10 15:17:31.914086 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-10 15:17:31.914092 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-10 15:17:31.914140 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-10 15:17:31.914146 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-10 15:17:31.914151 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-10 15:17:31.914157 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-10 15:17:31.914162 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-01-10 15:17:31.914167 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-10 15:17:31.914180 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-10 15:17:31.914186 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-10T15:14:54.000000 | 2026-01-10 15:17:31.914196 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-10 15:17:31.914201 | orchestrator | | accessIPv4 | | 2026-01-10 15:17:31.914207 | orchestrator | | accessIPv6 | | 2026-01-10 15:17:31.914212 | orchestrator | | addresses | test=192.168.112.160, 192.168.200.83 | 2026-01-10 15:17:31.914217 | orchestrator | | config_drive | | 2026-01-10 15:17:31.914222 | orchestrator | | created | 2026-01-10T15:14:28Z | 2026-01-10 15:17:31.914227 | orchestrator | | description | None | 2026-01-10 15:17:31.914232 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-10 15:17:31.914241 | orchestrator | | hostId | f0282884ea9da1233ef7286f3f4790cd0c3bdc47e51cd3be8f51d81c | 2026-01-10 15:17:31.914247 | orchestrator | | host_status | None | 2026-01-10 15:17:31.914257 | orchestrator | | id | 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 | 2026-01-10 15:17:31.914262 | orchestrator | | image | N/A (booted from volume) | 2026-01-10 15:17:31.914267 | orchestrator | | key_name | test | 2026-01-10 15:17:31.914273 | orchestrator | | locked | False | 2026-01-10 15:17:31.914278 | orchestrator | | locked_reason | None | 2026-01-10 15:17:31.914575 | orchestrator | | name | test-3 | 2026-01-10 15:17:31.914591 | orchestrator | | pinned_availability_zone | None | 2026-01-10 15:17:31.914602 | orchestrator | | progress | 0 | 2026-01-10 
15:17:31.914607 | orchestrator | | project_id | 61ba3875ae4b49cd8c99d0f35c288dbd | 2026-01-10 15:17:31.914613 | orchestrator | | properties | hostname='test-3' | 2026-01-10 15:17:31.914625 | orchestrator | | security_groups | name='ssh' | 2026-01-10 15:17:31.914630 | orchestrator | | | name='icmp' | 2026-01-10 15:17:31.914635 | orchestrator | | server_groups | None | 2026-01-10 15:17:31.914641 | orchestrator | | status | ACTIVE | 2026-01-10 15:17:31.914646 | orchestrator | | tags | test | 2026-01-10 15:17:31.914654 | orchestrator | | trusted_image_certificates | None | 2026-01-10 15:17:31.914664 | orchestrator | | updated | 2026-01-10T15:16:07Z | 2026-01-10 15:17:31.914669 | orchestrator | | user_id | 41817e4167584625b138399408d6dcc7 | 2026-01-10 15:17:31.914674 | orchestrator | | volumes_attached | delete_on_termination='True', id='a5b92d2e-f68c-4e1f-8104-aa5fa865dfd4' | 2026-01-10 15:17:31.921764 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:17:32.213303 | orchestrator | + openstack --os-cloud test server show test-4 2026-01-10 15:17:35.367219 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:17:35.367320 | orchestrator | | Field | Value | 2026-01-10 15:17:35.367333 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:17:35.367342 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-10 15:17:35.367350 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-10 15:17:35.367394 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-10 15:17:35.367403 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-01-10 15:17:35.367412 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-10 15:17:35.367420 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-10 15:17:35.367447 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-10 15:17:35.367462 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-10 15:17:35.367475 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-10 15:17:35.367488 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-10 15:17:35.367503 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-10 15:17:35.367535 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-10 15:17:35.367549 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-01-10 15:17:35.367557 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-10 15:17:35.367565 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-10 15:17:35.367573 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-10T15:15:37.000000 | 2026-01-10 15:17:35.367588 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-10 15:17:35.367596 | orchestrator | | accessIPv4 | | 2026-01-10 15:17:35.367604 | orchestrator | | accessIPv6 | | 2026-01-10 15:17:35.367612 | orchestrator | | 
addresses | test=192.168.112.127, 192.168.200.128 | 2026-01-10 15:17:35.367620 | orchestrator | | config_drive | | 2026-01-10 15:17:35.367634 | orchestrator | | created | 2026-01-10T15:15:11Z | 2026-01-10 15:17:35.367646 | orchestrator | | description | None | 2026-01-10 15:17:35.367655 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-10 15:17:35.367663 | orchestrator | | hostId | 32816a133403a15cbbbcdbc0ce7cc3d20323ce9d7102503932e9e322 | 2026-01-10 15:17:35.367671 | orchestrator | | host_status | None | 2026-01-10 15:17:35.367685 | orchestrator | | id | 339c60a1-f1de-4477-8671-1cf7187b8137 | 2026-01-10 15:17:35.367694 | orchestrator | | image | N/A (booted from volume) | 2026-01-10 15:17:35.367702 | orchestrator | | key_name | test | 2026-01-10 15:17:35.367710 | orchestrator | | locked | False | 2026-01-10 15:17:35.367724 | orchestrator | | locked_reason | None | 2026-01-10 15:17:35.367732 | orchestrator | | name | test-4 | 2026-01-10 15:17:35.367745 | orchestrator | | pinned_availability_zone | None | 2026-01-10 15:17:35.367754 | orchestrator | | progress | 0 | 2026-01-10 15:17:35.367764 | orchestrator | | project_id | 61ba3875ae4b49cd8c99d0f35c288dbd | 2026-01-10 15:17:35.367773 | orchestrator | | properties | hostname='test-4' | 2026-01-10 15:17:35.367788 | orchestrator | | security_groups | name='ssh' | 2026-01-10 15:17:35.367798 | orchestrator | | | name='icmp' | 2026-01-10 15:17:35.367807 | orchestrator | | server_groups | None | 2026-01-10 15:17:35.367821 | orchestrator | | status | ACTIVE | 2026-01-10 15:17:35.367830 | orchestrator | | tags | test | 2026-01-10 15:17:35.367840 | orchestrator | | 
trusted_image_certificates | None | 2026-01-10 15:17:35.367853 | orchestrator | | updated | 2026-01-10T15:16:12Z | 2026-01-10 15:17:35.367862 | orchestrator | | user_id | 41817e4167584625b138399408d6dcc7 | 2026-01-10 15:17:35.367872 | orchestrator | | volumes_attached | delete_on_termination='True', id='1846e391-5e7a-495d-a606-8cbb261d1353' | 2026-01-10 15:17:35.371483 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-10 15:17:35.655160 | orchestrator | + server_ping 2026-01-10 15:17:35.656595 | orchestrator | ++ tr -d '\r' 2026-01-10 15:17:35.657153 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-01-10 15:17:38.541154 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-10 15:17:38.541231 | orchestrator | + ping -c3 192.168.112.110 2026-01-10 15:17:38.564269 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data. 
2026-01-10 15:17:38.564388 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=13.9 ms 2026-01-10 15:17:39.554186 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=2.47 ms 2026-01-10 15:17:40.555650 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=1.89 ms 2026-01-10 15:17:40.555743 | orchestrator | 2026-01-10 15:17:40.555776 | orchestrator | --- 192.168.112.110 ping statistics --- 2026-01-10 15:17:40.555784 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-10 15:17:40.555791 | orchestrator | rtt min/avg/max/mdev = 1.887/6.075/13.875/5.520 ms 2026-01-10 15:17:40.555797 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-10 15:17:40.555803 | orchestrator | + ping -c3 192.168.112.127 2026-01-10 15:17:40.569777 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data. 2026-01-10 15:17:40.569864 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=9.16 ms 2026-01-10 15:17:41.564890 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.66 ms 2026-01-10 15:17:42.566287 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=2.07 ms 2026-01-10 15:17:42.566400 | orchestrator | 2026-01-10 15:17:42.566413 | orchestrator | --- 192.168.112.127 ping statistics --- 2026-01-10 15:17:42.566422 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-10 15:17:42.566429 | orchestrator | rtt min/avg/max/mdev = 2.070/4.630/9.163/3.213 ms 2026-01-10 15:17:42.566582 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-10 15:17:42.566595 | orchestrator | + ping -c3 192.168.112.160 2026-01-10 15:17:42.578942 | orchestrator | PING 192.168.112.160 (192.168.112.160) 56(84) bytes of data. 
2026-01-10 15:17:42.579021 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=1 ttl=63 time=6.93 ms 2026-01-10 15:17:43.575920 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=2 ttl=63 time=2.38 ms 2026-01-10 15:17:44.578470 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=3 ttl=63 time=2.17 ms 2026-01-10 15:17:44.578578 | orchestrator | 2026-01-10 15:17:44.578589 | orchestrator | --- 192.168.112.160 ping statistics --- 2026-01-10 15:17:44.578597 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-01-10 15:17:44.578604 | orchestrator | rtt min/avg/max/mdev = 2.168/3.826/6.930/2.196 ms 2026-01-10 15:17:44.578979 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-10 15:17:44.579006 | orchestrator | + ping -c3 192.168.112.149 2026-01-10 15:17:44.590061 | orchestrator | PING 192.168.112.149 (192.168.112.149) 56(84) bytes of data. 2026-01-10 15:17:44.590148 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=1 ttl=63 time=5.48 ms 2026-01-10 15:17:45.588805 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=2 ttl=63 time=2.48 ms 2026-01-10 15:17:46.590209 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=3 ttl=63 time=1.83 ms 2026-01-10 15:17:46.590326 | orchestrator | 2026-01-10 15:17:46.590347 | orchestrator | --- 192.168.112.149 ping statistics --- 2026-01-10 15:17:46.590365 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-01-10 15:17:46.590380 | orchestrator | rtt min/avg/max/mdev = 1.825/3.263/5.480/1.590 ms 2026-01-10 15:17:46.590881 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-10 15:17:46.590991 | orchestrator | + ping -c3 192.168.112.159 2026-01-10 15:17:46.602252 | orchestrator | PING 192.168.112.159 (192.168.112.159) 56(84) bytes of data. 
2026-01-10 15:17:46.602352 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=1 ttl=63 time=6.94 ms 2026-01-10 15:17:47.599045 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=2 ttl=63 time=2.33 ms 2026-01-10 15:17:48.601177 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=3 ttl=63 time=2.07 ms 2026-01-10 15:17:48.601290 | orchestrator | 2026-01-10 15:17:48.601325 | orchestrator | --- 192.168.112.159 ping statistics --- 2026-01-10 15:17:48.601352 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-10 15:17:48.601364 | orchestrator | rtt min/avg/max/mdev = 2.066/3.778/6.942/2.239 ms 2026-01-10 15:17:48.601375 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-10 15:17:48.601386 | orchestrator | + compute_list 2026-01-10 15:17:48.601398 | orchestrator | + osism manage compute list testbed-node-3 2026-01-10 15:17:52.078381 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:17:52.078465 | orchestrator | | ID | Name | Status | 2026-01-10 15:17:52.078471 | orchestrator | |--------------------------------------+--------+----------| 2026-01-10 15:17:52.078475 | orchestrator | | 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 | test-3 | ACTIVE | 2026-01-10 15:17:52.078500 | orchestrator | | b093228a-4314-4e18-871f-9ea35a18b83f | test | ACTIVE | 2026-01-10 15:17:52.078505 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:17:52.406778 | orchestrator | + osism manage compute list testbed-node-4 2026-01-10 15:17:55.904623 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:17:55.904726 | orchestrator | | ID | Name | Status | 2026-01-10 15:17:55.904736 | orchestrator | |--------------------------------------+--------+----------| 2026-01-10 15:17:55.904743 | orchestrator | | 339c60a1-f1de-4477-8671-1cf7187b8137 | test-4 | ACTIVE | 2026-01-10 15:17:55.904749 | orchestrator | | 
27de98fd-6f6f-4a22-ad37-6d0985f6951a | test-1 | ACTIVE | 2026-01-10 15:17:55.904756 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:17:56.277430 | orchestrator | + osism manage compute list testbed-node-5 2026-01-10 15:17:59.562943 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:17:59.563022 | orchestrator | | ID | Name | Status | 2026-01-10 15:17:59.563028 | orchestrator | |--------------------------------------+--------+----------| 2026-01-10 15:17:59.563033 | orchestrator | | 5f72ba77-bd85-468f-8626-74fb2642ae0d | test-2 | ACTIVE | 2026-01-10 15:17:59.563037 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:17:59.898594 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2026-01-10 15:18:03.399436 | orchestrator | 2026-01-10 15:18:03 | INFO  | Live migrating server 339c60a1-f1de-4477-8671-1cf7187b8137 2026-01-10 15:18:16.295155 | orchestrator | 2026-01-10 15:18:16 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress 2026-01-10 15:18:18.717821 | orchestrator | 2026-01-10 15:18:18 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress 2026-01-10 15:18:21.203349 | orchestrator | 2026-01-10 15:18:21 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress 2026-01-10 15:18:23.628492 | orchestrator | 2026-01-10 15:18:23 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress 2026-01-10 15:18:25.964951 | orchestrator | 2026-01-10 15:18:25 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress 2026-01-10 15:18:28.376007 | orchestrator | 2026-01-10 15:18:28 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress 2026-01-10 15:18:30.645996 | orchestrator | 2026-01-10 
15:18:30 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress 2026-01-10 15:18:32.964049 | orchestrator | 2026-01-10 15:18:32 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress 2026-01-10 15:18:35.337837 | orchestrator | 2026-01-10 15:18:35 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) completed with status ACTIVE 2026-01-10 15:18:35.337914 | orchestrator | 2026-01-10 15:18:35 | INFO  | Live migrating server 27de98fd-6f6f-4a22-ad37-6d0985f6951a 2026-01-10 15:18:48.141616 | orchestrator | 2026-01-10 15:18:48 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress 2026-01-10 15:18:50.564453 | orchestrator | 2026-01-10 15:18:50 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress 2026-01-10 15:18:52.922265 | orchestrator | 2026-01-10 15:18:52 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress 2026-01-10 15:18:55.247256 | orchestrator | 2026-01-10 15:18:55 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress 2026-01-10 15:18:57.548764 | orchestrator | 2026-01-10 15:18:57 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress 2026-01-10 15:18:59.965172 | orchestrator | 2026-01-10 15:18:59 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress 2026-01-10 15:19:02.332479 | orchestrator | 2026-01-10 15:19:02 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress 2026-01-10 15:19:04.617542 | orchestrator | 2026-01-10 15:19:04 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress 2026-01-10 15:19:07.038199 | orchestrator | 2026-01-10 15:19:07 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress 
2026-01-10 15:19:09.434829 | orchestrator | 2026-01-10 15:19:09 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress 2026-01-10 15:19:11.840615 | orchestrator | 2026-01-10 15:19:11 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) completed with status ACTIVE 2026-01-10 15:19:12.255754 | orchestrator | + compute_list 2026-01-10 15:19:12.255829 | orchestrator | + osism manage compute list testbed-node-3 2026-01-10 15:19:15.734607 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:19:15.734806 | orchestrator | | ID | Name | Status | 2026-01-10 15:19:15.734825 | orchestrator | |--------------------------------------+--------+----------| 2026-01-10 15:19:15.734833 | orchestrator | | 339c60a1-f1de-4477-8671-1cf7187b8137 | test-4 | ACTIVE | 2026-01-10 15:19:15.734841 | orchestrator | | 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 | test-3 | ACTIVE | 2026-01-10 15:19:15.734849 | orchestrator | | 27de98fd-6f6f-4a22-ad37-6d0985f6951a | test-1 | ACTIVE | 2026-01-10 15:19:15.734857 | orchestrator | | b093228a-4314-4e18-871f-9ea35a18b83f | test | ACTIVE | 2026-01-10 15:19:15.734865 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:19:16.179931 | orchestrator | + osism manage compute list testbed-node-4 2026-01-10 15:19:19.442795 | orchestrator | +------+--------+----------+ 2026-01-10 15:19:19.442875 | orchestrator | | ID | Name | Status | 2026-01-10 15:19:19.442883 | orchestrator | |------+--------+----------| 2026-01-10 15:19:19.442890 | orchestrator | +------+--------+----------+ 2026-01-10 15:19:19.826830 | orchestrator | + osism manage compute list testbed-node-5 2026-01-10 15:19:23.314597 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:19:23.314739 | orchestrator | | ID | Name | Status | 2026-01-10 15:19:23.314757 | orchestrator | 
|--------------------------------------+--------+----------| 2026-01-10 15:19:23.314770 | orchestrator | | 5f72ba77-bd85-468f-8626-74fb2642ae0d | test-2 | ACTIVE | 2026-01-10 15:19:23.314781 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:19:23.694326 | orchestrator | + server_ping 2026-01-10 15:19:23.696149 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-01-10 15:19:23.696834 | orchestrator | ++ tr -d '\r' 2026-01-10 15:19:26.587008 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-10 15:19:26.587124 | orchestrator | + ping -c3 192.168.112.110 2026-01-10 15:19:26.598291 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data. 2026-01-10 15:19:26.598388 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=6.79 ms 2026-01-10 15:19:27.594993 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=2.45 ms 2026-01-10 15:19:28.596900 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=2.25 ms 2026-01-10 15:19:28.596992 | orchestrator | 2026-01-10 15:19:28.597001 | orchestrator | --- 192.168.112.110 ping statistics --- 2026-01-10 15:19:28.597009 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-01-10 15:19:28.597016 | orchestrator | rtt min/avg/max/mdev = 2.254/3.831/6.792/2.095 ms 2026-01-10 15:19:28.597023 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-10 15:19:28.597100 | orchestrator | + ping -c3 192.168.112.127 2026-01-10 15:19:28.609947 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data. 
2026-01-10 15:19:28.610101 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=7.44 ms
2026-01-10 15:19:29.606800 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.57 ms
2026-01-10 15:19:30.608936 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=1.86 ms
2026-01-10 15:19:30.609005 | orchestrator |
2026-01-10 15:19:30.609012 | orchestrator | --- 192.168.112.127 ping statistics ---
2026-01-10 15:19:30.609018 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:19:30.609023 | orchestrator | rtt min/avg/max/mdev = 1.864/3.957/7.439/2.478 ms
2026-01-10 15:19:30.609028 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:19:30.609033 | orchestrator | + ping -c3 192.168.112.160
2026-01-10 15:19:30.619648 | orchestrator | PING 192.168.112.160 (192.168.112.160) 56(84) bytes of data.
2026-01-10 15:19:30.619767 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=1 ttl=63 time=6.03 ms
2026-01-10 15:19:31.617399 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=2 ttl=63 time=2.54 ms
2026-01-10 15:19:32.618558 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=3 ttl=63 time=1.76 ms
2026-01-10 15:19:32.618635 | orchestrator |
2026-01-10 15:19:32.618644 | orchestrator | --- 192.168.112.160 ping statistics ---
2026-01-10 15:19:32.618651 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:19:32.618656 | orchestrator | rtt min/avg/max/mdev = 1.762/3.443/6.026/1.853 ms
2026-01-10 15:19:32.619490 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:19:32.619517 | orchestrator | + ping -c3 192.168.112.149
2026-01-10 15:19:32.630990 | orchestrator | PING 192.168.112.149 (192.168.112.149) 56(84) bytes of data.
2026-01-10 15:19:32.631152 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=1 ttl=63 time=7.69 ms
2026-01-10 15:19:33.627882 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=2 ttl=63 time=2.31 ms
2026-01-10 15:19:34.629790 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=3 ttl=63 time=1.85 ms
2026-01-10 15:19:34.629885 | orchestrator |
2026-01-10 15:19:34.629904 | orchestrator | --- 192.168.112.149 ping statistics ---
2026-01-10 15:19:34.629920 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-10 15:19:34.629932 | orchestrator | rtt min/avg/max/mdev = 1.852/3.952/7.694/2.652 ms
2026-01-10 15:19:34.629963 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:19:34.629974 | orchestrator | + ping -c3 192.168.112.159
2026-01-10 15:19:34.640604 | orchestrator | PING 192.168.112.159 (192.168.112.159) 56(84) bytes of data.
2026-01-10 15:19:34.640702 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=1 ttl=63 time=5.52 ms
2026-01-10 15:19:35.639370 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=2 ttl=63 time=2.41 ms
2026-01-10 15:19:36.640882 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=3 ttl=63 time=1.89 ms
2026-01-10 15:19:36.640970 | orchestrator |
2026-01-10 15:19:36.640998 | orchestrator | --- 192.168.112.159 ping statistics ---
2026-01-10 15:19:36.641009 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-10 15:19:36.641017 | orchestrator | rtt min/avg/max/mdev = 1.887/3.269/5.517/1.603 ms
2026-01-10 15:19:36.641580 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2026-01-10 15:19:40.217685 | orchestrator | 2026-01-10 15:19:40 | INFO  | Live migrating server 5f72ba77-bd85-468f-8626-74fb2642ae0d
2026-01-10 15:19:53.031157 | orchestrator | 2026-01-10 15:19:53 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:19:55.418318 | orchestrator | 2026-01-10 15:19:55 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:19:58.012587 | orchestrator | 2026-01-10 15:19:58 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:20:00.386760 | orchestrator | 2026-01-10 15:20:00 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:20:02.697680 | orchestrator | 2026-01-10 15:20:02 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:20:04.997172 | orchestrator | 2026-01-10 15:20:04 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:20:07.357362 | orchestrator | 2026-01-10 15:20:07 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:20:09.663700 | orchestrator | 2026-01-10 15:20:09 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:20:11.925179 | orchestrator | 2026-01-10 15:20:11 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) completed with status ACTIVE
2026-01-10 15:20:12.332189 | orchestrator | + compute_list
2026-01-10 15:20:12.332277 | orchestrator | + osism manage compute list testbed-node-3
2026-01-10 15:20:15.861122 | orchestrator | +--------------------------------------+--------+----------+
2026-01-10 15:20:15.861255 | orchestrator | | ID                                   | Name   | Status   |
2026-01-10 15:20:15.861271 | orchestrator | |--------------------------------------+--------+----------|
2026-01-10 15:20:15.861282 | orchestrator | | 339c60a1-f1de-4477-8671-1cf7187b8137 | test-4 | ACTIVE   |
2026-01-10 15:20:15.861292 | orchestrator | | 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 | test-3 | ACTIVE   |
2026-01-10 15:20:15.861302 | orchestrator | | 5f72ba77-bd85-468f-8626-74fb2642ae0d | test-2 | ACTIVE   |
2026-01-10 15:20:15.861312 | orchestrator | | 27de98fd-6f6f-4a22-ad37-6d0985f6951a | test-1 | ACTIVE   |
2026-01-10 15:20:15.861322 | orchestrator | | b093228a-4314-4e18-871f-9ea35a18b83f | test   | ACTIVE   |
2026-01-10 15:20:15.861332 | orchestrator | +--------------------------------------+--------+----------+
2026-01-10 15:20:16.339279 | orchestrator | + osism manage compute list testbed-node-4
2026-01-10 15:20:19.288797 | orchestrator | +------+--------+----------+
2026-01-10 15:20:19.288898 | orchestrator | | ID   | Name   | Status   |
2026-01-10 15:20:19.288908 | orchestrator | |------+--------+----------|
2026-01-10 15:20:19.288915 | orchestrator | +------+--------+----------+
2026-01-10 15:20:19.663799 | orchestrator | + osism manage compute list testbed-node-5
2026-01-10 15:20:22.597108 | orchestrator | +------+--------+----------+
2026-01-10 15:20:22.597190 | orchestrator | | ID   | Name   | Status   |
2026-01-10 15:20:22.597197 | orchestrator | |------+--------+----------|
2026-01-10 15:20:22.597203 | orchestrator | +------+--------+----------+
2026-01-10 15:20:22.972023 | orchestrator | + server_ping
2026-01-10 15:20:22.974069 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-01-10 15:20:22.974120 | orchestrator | ++ tr -d '\r'
2026-01-10 15:20:25.750667 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:20:25.750759 | orchestrator | + ping -c3 192.168.112.110
2026-01-10 15:20:25.762108 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data.
2026-01-10 15:20:25.762187 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=8.21 ms
2026-01-10 15:20:26.758101 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=2.56 ms
2026-01-10 15:20:27.759886 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=1.98 ms
2026-01-10 15:20:28.150704 | orchestrator |
2026-01-10 15:20:28.150752 | orchestrator | --- 192.168.112.110 ping statistics ---
2026-01-10 15:20:28.150770 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:20:28.150786 | orchestrator | rtt min/avg/max/mdev = 1.982/4.251/8.212/2.810 ms
2026-01-10 15:20:28.150816 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:20:28.150833 | orchestrator | + ping -c3 192.168.112.127
2026-01-10 15:20:28.150851 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data.
2026-01-10 15:20:28.150868 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=10.2 ms
2026-01-10 15:20:28.768872 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.91 ms
2026-01-10 15:20:29.770292 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=2.58 ms
2026-01-10 15:20:29.770848 | orchestrator |
2026-01-10 15:20:29.771025 | orchestrator | --- 192.168.112.127 ping statistics ---
2026-01-10 15:20:29.771125 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:20:29.771136 | orchestrator | rtt min/avg/max/mdev = 2.584/5.235/10.209/3.519 ms
2026-01-10 15:20:29.771150 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:20:29.771158 | orchestrator | + ping -c3 192.168.112.160
2026-01-10 15:20:29.782189 | orchestrator | PING 192.168.112.160 (192.168.112.160) 56(84) bytes of data.
2026-01-10 15:20:29.782259 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=1 ttl=63 time=5.75 ms
2026-01-10 15:20:30.780947 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=2 ttl=63 time=2.49 ms
2026-01-10 15:20:31.782317 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=3 ttl=63 time=1.85 ms
2026-01-10 15:20:31.782495 | orchestrator |
2026-01-10 15:20:31.782512 | orchestrator | --- 192.168.112.160 ping statistics ---
2026-01-10 15:20:31.782522 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-10 15:20:31.782530 | orchestrator | rtt min/avg/max/mdev = 1.850/3.363/5.748/1.706 ms
2026-01-10 15:20:31.782547 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:20:31.782554 | orchestrator | + ping -c3 192.168.112.149
2026-01-10 15:20:31.792319 | orchestrator | PING 192.168.112.149 (192.168.112.149) 56(84) bytes of data.
2026-01-10 15:20:31.792399 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=1 ttl=63 time=6.14 ms
2026-01-10 15:20:32.789188 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=2 ttl=63 time=2.18 ms
2026-01-10 15:20:33.790946 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=3 ttl=63 time=2.23 ms
2026-01-10 15:20:33.791013 | orchestrator |
2026-01-10 15:20:33.791023 | orchestrator | --- 192.168.112.149 ping statistics ---
2026-01-10 15:20:33.791217 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-01-10 15:20:33.791227 | orchestrator | rtt min/avg/max/mdev = 2.177/3.516/6.142/1.856 ms
2026-01-10 15:20:33.791240 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:20:33.791245 | orchestrator | + ping -c3 192.168.112.159
2026-01-10 15:20:33.802126 | orchestrator | PING 192.168.112.159 (192.168.112.159) 56(84) bytes of data.
2026-01-10 15:20:33.802189 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=1 ttl=63 time=5.44 ms
2026-01-10 15:20:34.801517 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=2 ttl=63 time=3.01 ms
2026-01-10 15:20:35.801986 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=3 ttl=63 time=2.19 ms
2026-01-10 15:20:35.802108 | orchestrator |
2026-01-10 15:20:35.802118 | orchestrator | --- 192.168.112.159 ping statistics ---
2026-01-10 15:20:35.802125 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:20:35.802131 | orchestrator | rtt min/avg/max/mdev = 2.188/3.544/5.436/1.379 ms
2026-01-10 15:20:35.803204 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2026-01-10 15:20:39.258645 | orchestrator | 2026-01-10 15:20:39 | INFO  | Live migrating server 339c60a1-f1de-4477-8671-1cf7187b8137
2026-01-10 15:20:50.400310 | orchestrator | 2026-01-10 15:20:50 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress
2026-01-10 15:20:52.796351 | orchestrator | 2026-01-10 15:20:52 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress
2026-01-10 15:20:55.150829 | orchestrator | 2026-01-10 15:20:55 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress
2026-01-10 15:20:57.488554 | orchestrator | 2026-01-10 15:20:57 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress
2026-01-10 15:20:59.825434 | orchestrator | 2026-01-10 15:20:59 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress
2026-01-10 15:21:02.206265 | orchestrator | 2026-01-10 15:21:02 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress
2026-01-10 15:21:04.487618 | orchestrator | 2026-01-10 15:21:04 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress
2026-01-10 15:21:06.838383 | orchestrator | 2026-01-10 15:21:06 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress
2026-01-10 15:21:09.224901 | orchestrator | 2026-01-10 15:21:09 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress
2026-01-10 15:21:11.642298 | orchestrator | 2026-01-10 15:21:11 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) completed with status ACTIVE
2026-01-10 15:21:11.642393 | orchestrator | 2026-01-10 15:21:11 | INFO  | Live migrating server 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6
2026-01-10 15:21:22.395017 | orchestrator | 2026-01-10 15:21:22 | INFO  | Live migration of 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 (test-3) is still in progress
2026-01-10 15:21:24.752070 | orchestrator | 2026-01-10 15:21:24 | INFO  | Live migration of 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 (test-3) is still in progress
2026-01-10 15:21:27.099983 | orchestrator | 2026-01-10 15:21:27 | INFO  | Live migration of 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 (test-3) is still in progress
2026-01-10 15:21:29.433320 | orchestrator | 2026-01-10 15:21:29 | INFO  | Live migration of 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 (test-3) is still in progress
2026-01-10 15:21:31.773944 | orchestrator | 2026-01-10 15:21:31 | INFO  | Live migration of 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 (test-3) is still in progress
2026-01-10 15:21:34.156231 | orchestrator | 2026-01-10 15:21:34 | INFO  | Live migration of 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 (test-3) is still in progress
2026-01-10 15:21:36.528452 | orchestrator | 2026-01-10 15:21:36 | INFO  | Live migration of 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 (test-3) is still in progress
2026-01-10 15:21:38.908386 | orchestrator | 2026-01-10 15:21:38 | INFO  | Live migration of 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 (test-3) is still in progress
2026-01-10 15:21:41.305813 | orchestrator | 2026-01-10 15:21:41 | INFO  | Live migration of 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 (test-3) is still in progress
2026-01-10 15:21:43.864524 | orchestrator | 2026-01-10 15:21:43 | INFO  | Live migration of 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 (test-3) completed with status ACTIVE
2026-01-10 15:21:43.864629 | orchestrator | 2026-01-10 15:21:43 | INFO  | Live migrating server 5f72ba77-bd85-468f-8626-74fb2642ae0d
2026-01-10 15:21:57.955044 | orchestrator | 2026-01-10 15:21:57 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:22:00.361200 | orchestrator | 2026-01-10 15:22:00 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:22:02.717573 | orchestrator | 2026-01-10 15:22:02 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:22:05.009355 | orchestrator | 2026-01-10 15:22:05 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:22:07.514076 | orchestrator | 2026-01-10 15:22:07 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:22:09.827919 | orchestrator | 2026-01-10 15:22:09 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:22:12.199834 | orchestrator | 2026-01-10 15:22:12 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:22:14.504338 | orchestrator | 2026-01-10 15:22:14 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:22:16.813754 | orchestrator | 2026-01-10 15:22:16 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:22:19.149797 | orchestrator | 2026-01-10 15:22:19 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) completed with status ACTIVE
2026-01-10 15:22:19.149864 | orchestrator | 2026-01-10 15:22:19 | INFO  | Live migrating server 27de98fd-6f6f-4a22-ad37-6d0985f6951a
2026-01-10 15:22:29.827587 | orchestrator | 2026-01-10 15:22:29 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress
2026-01-10 15:22:32.184169 | orchestrator | 2026-01-10 15:22:32 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress
2026-01-10 15:22:34.569816 | orchestrator | 2026-01-10 15:22:34 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress
2026-01-10 15:22:37.137538 | orchestrator | 2026-01-10 15:22:37 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress
2026-01-10 15:22:39.489122 | orchestrator | 2026-01-10 15:22:39 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress
2026-01-10 15:22:41.795191 | orchestrator | 2026-01-10 15:22:41 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress
2026-01-10 15:22:44.086383 | orchestrator | 2026-01-10 15:22:44 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress
2026-01-10 15:22:46.383748 | orchestrator | 2026-01-10 15:22:46 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress
2026-01-10 15:22:48.657671 | orchestrator | 2026-01-10 15:22:48 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) completed with status ACTIVE
2026-01-10 15:22:48.657781 | orchestrator | 2026-01-10 15:22:48 | INFO  | Live migrating server b093228a-4314-4e18-871f-9ea35a18b83f
2026-01-10 15:22:57.917747 | orchestrator | 2026-01-10 15:22:57 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:23:00.337961 | orchestrator | 2026-01-10 15:23:00 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:23:02.920520 | orchestrator | 2026-01-10 15:23:02 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:23:05.310299 | orchestrator | 2026-01-10 15:23:05 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:23:07.703913 | orchestrator | 2026-01-10 15:23:07 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:23:10.122948 | orchestrator | 2026-01-10 15:23:10 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:23:12.432014 | orchestrator | 2026-01-10 15:23:12 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:23:14.797268 | orchestrator | 2026-01-10 15:23:14 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:23:17.151705 | orchestrator | 2026-01-10 15:23:17 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:23:19.470789 | orchestrator | 2026-01-10 15:23:19 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:23:21.908829 | orchestrator | 2026-01-10 15:23:21 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) completed with status ACTIVE
2026-01-10 15:23:22.250625 | orchestrator | + compute_list
2026-01-10 15:23:22.250697 | orchestrator | + osism manage compute list testbed-node-3
2026-01-10 15:23:25.092164 | orchestrator | +------+--------+----------+
2026-01-10 15:23:25.092274 | orchestrator | | ID   | Name   | Status   |
2026-01-10 15:23:25.092286 | orchestrator | |------+--------+----------|
2026-01-10 15:23:25.092294 | orchestrator | +------+--------+----------+
2026-01-10 15:23:25.461277 | orchestrator | + osism manage compute list testbed-node-4
2026-01-10 15:23:28.884354 | orchestrator | +--------------------------------------+--------+----------+
2026-01-10 15:23:28.884446 | orchestrator | | ID                                   | Name   | Status   |
2026-01-10 15:23:28.884458 | orchestrator | |--------------------------------------+--------+----------|
2026-01-10 15:23:28.884469 | orchestrator | | 339c60a1-f1de-4477-8671-1cf7187b8137 | test-4 | ACTIVE   |
2026-01-10 15:23:28.884479 | orchestrator | | 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 | test-3 | ACTIVE   |
2026-01-10 15:23:28.884490 | orchestrator | | 5f72ba77-bd85-468f-8626-74fb2642ae0d | test-2 | ACTIVE   |
2026-01-10 15:23:28.884500 | orchestrator | | 27de98fd-6f6f-4a22-ad37-6d0985f6951a | test-1 | ACTIVE   |
2026-01-10 15:23:28.884510 | orchestrator | | b093228a-4314-4e18-871f-9ea35a18b83f | test   | ACTIVE   |
2026-01-10 15:23:28.884520 | orchestrator | +--------------------------------------+--------+----------+
2026-01-10 15:23:29.193353 | orchestrator | + osism manage compute list testbed-node-5
2026-01-10 15:23:32.125810 | orchestrator | +------+--------+----------+
2026-01-10 15:23:32.125884 | orchestrator | | ID   | Name   | Status   |
2026-01-10 15:23:32.125891 | orchestrator | |------+--------+----------|
2026-01-10 15:23:32.125897 | orchestrator | +------+--------+----------+
2026-01-10 15:23:32.453586 | orchestrator | + server_ping
2026-01-10 15:23:32.454501 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-01-10 15:23:32.454772 | orchestrator | ++ tr -d '\r'
2026-01-10 15:23:35.471152 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:23:35.471219 | orchestrator | + ping -c3 192.168.112.110
2026-01-10 15:23:35.483427 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data.
2026-01-10 15:23:35.483496 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=9.95 ms
2026-01-10 15:23:36.477914 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=3.13 ms
2026-01-10 15:23:37.479686 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=2.55 ms
2026-01-10 15:23:37.479769 | orchestrator |
2026-01-10 15:23:37.479780 | orchestrator | --- 192.168.112.110 ping statistics ---
2026-01-10 15:23:37.479789 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:23:37.479796 | orchestrator | rtt min/avg/max/mdev = 2.546/5.208/9.947/3.359 ms
2026-01-10 15:23:37.479804 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:23:37.479809 | orchestrator | + ping -c3 192.168.112.127
2026-01-10 15:23:37.492507 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data.
2026-01-10 15:23:37.492574 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=10.4 ms
2026-01-10 15:23:38.487117 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.44 ms
2026-01-10 15:23:39.488757 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=2.39 ms
2026-01-10 15:23:39.489889 | orchestrator |
2026-01-10 15:23:39.489929 | orchestrator | --- 192.168.112.127 ping statistics ---
2026-01-10 15:23:39.489961 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:23:39.489973 | orchestrator | rtt min/avg/max/mdev = 2.385/5.066/10.372/3.751 ms
2026-01-10 15:23:39.490008 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:23:39.490040 | orchestrator | + ping -c3 192.168.112.160
2026-01-10 15:23:39.501040 | orchestrator | PING 192.168.112.160 (192.168.112.160) 56(84) bytes of data.
2026-01-10 15:23:39.501120 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=1 ttl=63 time=7.02 ms
2026-01-10 15:23:40.496917 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=2 ttl=63 time=2.33 ms
2026-01-10 15:23:41.498260 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=3 ttl=63 time=1.64 ms
2026-01-10 15:23:41.498654 | orchestrator |
2026-01-10 15:23:41.498690 | orchestrator | --- 192.168.112.160 ping statistics ---
2026-01-10 15:23:41.498715 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-01-10 15:23:41.498732 | orchestrator | rtt min/avg/max/mdev = 1.644/3.665/7.023/2.390 ms
2026-01-10 15:23:41.499152 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:23:41.499180 | orchestrator | + ping -c3 192.168.112.149
2026-01-10 15:23:41.509532 | orchestrator | PING 192.168.112.149 (192.168.112.149) 56(84) bytes of data.
2026-01-10 15:23:41.509592 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=1 ttl=63 time=5.81 ms
2026-01-10 15:23:42.507900 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=2 ttl=63 time=2.67 ms
2026-01-10 15:23:43.509961 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=3 ttl=63 time=2.59 ms
2026-01-10 15:23:43.510214 | orchestrator |
2026-01-10 15:23:43.510234 | orchestrator | --- 192.168.112.149 ping statistics ---
2026-01-10 15:23:43.510247 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:23:43.510259 | orchestrator | rtt min/avg/max/mdev = 2.594/3.692/5.814/1.500 ms
2026-01-10 15:23:43.510468 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:23:43.510490 | orchestrator | + ping -c3 192.168.112.159
2026-01-10 15:23:43.521458 | orchestrator | PING 192.168.112.159 (192.168.112.159) 56(84) bytes of data.
2026-01-10 15:23:43.521581 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=1 ttl=63 time=5.79 ms
2026-01-10 15:23:44.521340 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=2 ttl=63 time=3.06 ms
2026-01-10 15:23:45.521403 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=3 ttl=63 time=1.71 ms
2026-01-10 15:23:45.521505 | orchestrator |
2026-01-10 15:23:45.521521 | orchestrator | --- 192.168.112.159 ping statistics ---
2026-01-10 15:23:45.521534 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-10 15:23:45.521545 | orchestrator | rtt min/avg/max/mdev = 1.710/3.517/5.786/1.695 ms
2026-01-10 15:23:45.522867 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2026-01-10 15:23:48.941888 | orchestrator | 2026-01-10 15:23:48 | INFO  | Live migrating server 339c60a1-f1de-4477-8671-1cf7187b8137
2026-01-10 15:24:02.355736 | orchestrator | 2026-01-10 15:24:02 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress
2026-01-10 15:24:04.738156 | orchestrator | 2026-01-10 15:24:04 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress
2026-01-10 15:24:07.132346 | orchestrator | 2026-01-10 15:24:07 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress
2026-01-10 15:24:09.548345 | orchestrator | 2026-01-10 15:24:09 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress
2026-01-10 15:24:11.847946 | orchestrator | 2026-01-10 15:24:11 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress
2026-01-10 15:24:14.162747 | orchestrator | 2026-01-10 15:24:14 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress
2026-01-10 15:24:16.528191 | orchestrator | 2026-01-10 15:24:16 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress
2026-01-10 15:24:18.890085 | orchestrator | 2026-01-10 15:24:18 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) is still in progress
2026-01-10 15:24:21.192810 | orchestrator | 2026-01-10 15:24:21 | INFO  | Live migration of 339c60a1-f1de-4477-8671-1cf7187b8137 (test-4) completed with status ACTIVE
2026-01-10 15:24:21.192905 | orchestrator | 2026-01-10 15:24:21 | INFO  | Live migrating server 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6
2026-01-10 15:24:31.451400 | orchestrator | 2026-01-10 15:24:31 | INFO  | Live migration of 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 (test-3) is still in progress
2026-01-10 15:24:33.804950 | orchestrator | 2026-01-10 15:24:33 | INFO  | Live migration of 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 (test-3) is still in progress
2026-01-10 15:24:36.183278 | orchestrator | 2026-01-10 15:24:36 | INFO  | Live migration of 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 (test-3) is still in progress
2026-01-10 15:24:38.572137 | orchestrator | 2026-01-10 15:24:38 | INFO  | Live migration of 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 (test-3) is still in progress
2026-01-10 15:24:40.895649 | orchestrator | 2026-01-10 15:24:40 | INFO  | Live migration of 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 (test-3) is still in progress
2026-01-10 15:24:43.217543 | orchestrator | 2026-01-10 15:24:43 | INFO  | Live migration of 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 (test-3) is still in progress
2026-01-10 15:24:45.507785 | orchestrator | 2026-01-10 15:24:45 | INFO  | Live migration of 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 (test-3) is still in progress
2026-01-10 15:24:47.822120 | orchestrator | 2026-01-10 15:24:47 | INFO  | Live migration of 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 (test-3) is still in progress
2026-01-10 15:24:50.173350 | orchestrator | 2026-01-10 15:24:50 | INFO  | Live migration of 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 (test-3) completed with status ACTIVE
2026-01-10 15:24:50.173445 | orchestrator | 2026-01-10 15:24:50 | INFO  | Live migrating server 5f72ba77-bd85-468f-8626-74fb2642ae0d
2026-01-10 15:25:00.534408 | orchestrator | 2026-01-10 15:25:00 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:25:02.906220 | orchestrator | 2026-01-10 15:25:02 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:25:05.275903 | orchestrator | 2026-01-10 15:25:05 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:25:07.675283 | orchestrator | 2026-01-10 15:25:07 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:25:10.018250 | orchestrator | 2026-01-10 15:25:10 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:25:12.298543 | orchestrator | 2026-01-10 15:25:12 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:25:14.649696 | orchestrator | 2026-01-10 15:25:14 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:25:16.951892 | orchestrator | 2026-01-10 15:25:16 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) is still in progress
2026-01-10 15:25:19.385736 | orchestrator | 2026-01-10 15:25:19 | INFO  | Live migration of 5f72ba77-bd85-468f-8626-74fb2642ae0d (test-2) completed with status ACTIVE
2026-01-10 15:25:19.385813 | orchestrator | 2026-01-10 15:25:19 | INFO  | Live migrating server 27de98fd-6f6f-4a22-ad37-6d0985f6951a
2026-01-10 15:25:29.360730 | orchestrator | 2026-01-10 15:25:29 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress
2026-01-10 15:25:31.746450 | orchestrator | 2026-01-10 15:25:31 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress
2026-01-10 15:25:34.157407 | orchestrator | 2026-01-10 15:25:34 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress
2026-01-10 15:25:36.551706 | orchestrator | 2026-01-10 15:25:36 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress
2026-01-10 15:25:38.859323 | orchestrator | 2026-01-10 15:25:38 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress
2026-01-10 15:25:41.222570 | orchestrator | 2026-01-10 15:25:41 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress
2026-01-10 15:25:43.547903 | orchestrator | 2026-01-10 15:25:43 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress
2026-01-10 15:25:45.834274 | orchestrator | 2026-01-10 15:25:45 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress
2026-01-10 15:25:48.245495 | orchestrator | 2026-01-10 15:25:48 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) is still in progress
2026-01-10 15:25:50.600160 | orchestrator | 2026-01-10 15:25:50 | INFO  | Live migration of 27de98fd-6f6f-4a22-ad37-6d0985f6951a (test-1) completed with status ACTIVE
2026-01-10 15:25:50.600240 | orchestrator | 2026-01-10 15:25:50 | INFO  | Live migrating server b093228a-4314-4e18-871f-9ea35a18b83f
2026-01-10 15:26:01.123741 | orchestrator | 2026-01-10 15:26:01 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:26:03.460354 | orchestrator | 2026-01-10 15:26:03 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:26:05.858074 | orchestrator | 2026-01-10 15:26:05 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:26:08.248139 | orchestrator | 2026-01-10 15:26:08 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:26:10.583058 | orchestrator | 2026-01-10 15:26:10 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:26:12.964873 | orchestrator | 2026-01-10 15:26:12 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:26:15.302205 | orchestrator | 2026-01-10 15:26:15 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:26:17.624999 | orchestrator | 2026-01-10 15:26:17 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:26:19.936821 | orchestrator | 2026-01-10 15:26:19 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:26:22.254334 | orchestrator | 2026-01-10 15:26:22 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) is still in progress
2026-01-10 15:26:24.545187 | orchestrator | 2026-01-10 15:26:24 | INFO  | Live migration of b093228a-4314-4e18-871f-9ea35a18b83f (test) completed with status ACTIVE
2026-01-10 15:26:24.909414 | orchestrator | + compute_list
2026-01-10 15:26:24.909480 | orchestrator | + osism manage compute list testbed-node-3
2026-01-10 15:26:27.834196 | orchestrator | +------+--------+----------+
2026-01-10 15:26:27.834285 | orchestrator | | ID   | Name   | Status   |
2026-01-10 15:26:27.834294 | orchestrator | |------+--------+----------|
2026-01-10 15:26:27.834301 | orchestrator | +------+--------+----------+
2026-01-10 15:26:28.175218 | orchestrator | + osism manage compute list testbed-node-4
2026-01-10 15:26:31.151639 | orchestrator | +------+--------+----------+
2026-01-10 15:26:31.151755 | orchestrator | | ID   | Name   | Status   |
2026-01-10 15:26:31.151781 | orchestrator | |------+--------+----------|
2026-01-10 15:26:31.151801 | orchestrator | +------+--------+----------+
2026-01-10 15:26:31.481460 | orchestrator | + osism manage compute list testbed-node-5
2026-01-10 
15:26:34.902713 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:26:34.902811 | orchestrator | | ID | Name | Status | 2026-01-10 15:26:34.902824 | orchestrator | |--------------------------------------+--------+----------| 2026-01-10 15:26:34.902834 | orchestrator | | 339c60a1-f1de-4477-8671-1cf7187b8137 | test-4 | ACTIVE | 2026-01-10 15:26:34.902844 | orchestrator | | 8d5e766f-ef8c-4833-bfb6-6daeb61c1ae6 | test-3 | ACTIVE | 2026-01-10 15:26:34.902852 | orchestrator | | 5f72ba77-bd85-468f-8626-74fb2642ae0d | test-2 | ACTIVE | 2026-01-10 15:26:34.902862 | orchestrator | | 27de98fd-6f6f-4a22-ad37-6d0985f6951a | test-1 | ACTIVE | 2026-01-10 15:26:34.902871 | orchestrator | | b093228a-4314-4e18-871f-9ea35a18b83f | test | ACTIVE | 2026-01-10 15:26:34.902907 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-10 15:26:35.233019 | orchestrator | + server_ping 2026-01-10 15:26:35.235039 | orchestrator | ++ tr -d '\r' 2026-01-10 15:26:35.235079 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-01-10 15:26:38.348592 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-10 15:26:38.348708 | orchestrator | + ping -c3 192.168.112.110 2026-01-10 15:26:38.362613 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data. 
2026-01-10 15:26:38.362719 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=10.2 ms
2026-01-10 15:26:39.356982 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=2.74 ms
2026-01-10 15:26:40.358713 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=1.68 ms
2026-01-10 15:26:40.358850 | orchestrator |
2026-01-10 15:26:40.358881 | orchestrator | --- 192.168.112.110 ping statistics ---
2026-01-10 15:26:40.358903 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-10 15:26:40.358920 | orchestrator | rtt min/avg/max/mdev = 1.679/4.859/10.156/3.770 ms
2026-01-10 15:26:40.358993 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:26:40.359012 | orchestrator | + ping -c3 192.168.112.127
2026-01-10 15:26:40.373932 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data.
2026-01-10 15:26:40.374124 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=11.1 ms
2026-01-10 15:26:41.367030 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.78 ms
2026-01-10 15:26:42.369413 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=2.41 ms
2026-01-10 15:26:42.369517 | orchestrator |
2026-01-10 15:26:42.369534 | orchestrator | --- 192.168.112.127 ping statistics ---
2026-01-10 15:26:42.369547 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-10 15:26:42.369559 | orchestrator | rtt min/avg/max/mdev = 2.412/5.439/11.127/4.024 ms
2026-01-10 15:26:42.369571 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:26:42.369583 | orchestrator | + ping -c3 192.168.112.160
2026-01-10 15:26:42.382290 | orchestrator | PING 192.168.112.160 (192.168.112.160) 56(84) bytes of data.
2026-01-10 15:26:42.382381 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=1 ttl=63 time=8.08 ms
2026-01-10 15:26:43.378257 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=2 ttl=63 time=2.63 ms
2026-01-10 15:26:44.379696 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=3 ttl=63 time=2.12 ms
2026-01-10 15:26:44.379796 | orchestrator |
2026-01-10 15:26:44.379812 | orchestrator | --- 192.168.112.160 ping statistics ---
2026-01-10 15:26:44.379825 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:26:44.379837 | orchestrator | rtt min/avg/max/mdev = 2.120/4.277/8.078/2.695 ms
2026-01-10 15:26:44.380436 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:26:44.380469 | orchestrator | + ping -c3 192.168.112.149
2026-01-10 15:26:44.392275 | orchestrator | PING 192.168.112.149 (192.168.112.149) 56(84) bytes of data.
2026-01-10 15:26:44.392400 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=1 ttl=63 time=7.13 ms
2026-01-10 15:26:45.389516 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=2 ttl=63 time=2.82 ms
2026-01-10 15:26:46.391293 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=3 ttl=63 time=2.25 ms
2026-01-10 15:26:46.391394 | orchestrator |
2026-01-10 15:26:46.391410 | orchestrator | --- 192.168.112.149 ping statistics ---
2026-01-10 15:26:46.391435 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:26:46.391448 | orchestrator | rtt min/avg/max/mdev = 2.248/4.068/7.134/2.180 ms
2026-01-10 15:26:46.392243 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-10 15:26:46.392270 | orchestrator | + ping -c3 192.168.112.159
2026-01-10 15:26:46.403301 | orchestrator | PING 192.168.112.159 (192.168.112.159) 56(84) bytes of data.
2026-01-10 15:26:46.403396 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=1 ttl=63 time=5.27 ms
2026-01-10 15:26:47.401754 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=2 ttl=63 time=2.18 ms
2026-01-10 15:26:48.403592 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=3 ttl=63 time=1.86 ms
2026-01-10 15:26:48.403673 | orchestrator |
2026-01-10 15:26:48.403682 | orchestrator | --- 192.168.112.159 ping statistics ---
2026-01-10 15:26:48.403691 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-10 15:26:48.403697 | orchestrator | rtt min/avg/max/mdev = 1.859/3.102/5.270/1.538 ms
2026-01-10 15:26:48.692997 | orchestrator | ok: Runtime: 0:24:17.680686
2026-01-10 15:26:48.753847 |
2026-01-10 15:26:48.753994 | TASK [Run tempest]
2026-01-10 15:26:49.290152 | orchestrator | skipping: Conditional result was False
2026-01-10 15:26:49.310788 |
2026-01-10 15:26:49.311058 | TASK [Check prometheus alert status]
2026-01-10 15:26:49.850758 | orchestrator | skipping: Conditional result was False
2026-01-10 15:26:49.853677 |
2026-01-10 15:26:49.853836 | PLAY RECAP
2026-01-10 15:26:49.853942 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2026-01-10 15:26:49.853989 |
2026-01-10 15:26:50.093101 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-01-10 15:26:50.095498 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-10 15:26:50.878282 |
2026-01-10 15:26:50.878491 | PLAY [Post output play]
2026-01-10 15:26:50.896218 |
2026-01-10 15:26:50.896413 | LOOP [stage-output : Register sources]
2026-01-10 15:26:50.975130 |
2026-01-10 15:26:50.975519 | TASK [stage-output : Check sudo]
2026-01-10 15:26:51.909495 | orchestrator | sudo: a password is required
2026-01-10 15:26:52.024756 | orchestrator | ok: Runtime: 0:00:00.012670
2026-01-10 15:26:52.040407 |
2026-01-10 15:26:52.040593 | LOOP [stage-output : Set source and destination for files and folders]
2026-01-10 15:26:52.092947 |
2026-01-10 15:26:52.093328 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-01-10 15:26:52.174581 | orchestrator | ok
2026-01-10 15:26:52.183372 |
2026-01-10 15:26:52.183558 | LOOP [stage-output : Ensure target folders exist]
2026-01-10 15:26:52.668098 | orchestrator | ok: "docs"
2026-01-10 15:26:52.668526 |
2026-01-10 15:26:52.917152 | orchestrator | ok: "artifacts"
2026-01-10 15:26:53.179068 | orchestrator | ok: "logs"
2026-01-10 15:26:53.198685 |
2026-01-10 15:26:53.198909 | LOOP [stage-output : Copy files and folders to staging folder]
2026-01-10 15:26:53.249958 |
2026-01-10 15:26:53.250300 | TASK [stage-output : Make all log files readable]
2026-01-10 15:26:53.566422 | orchestrator | ok
2026-01-10 15:26:53.574332 |
2026-01-10 15:26:53.574471 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-01-10 15:26:53.608814 | orchestrator | skipping: Conditional result was False
2026-01-10 15:26:53.620962 |
2026-01-10 15:26:53.621123 | TASK [stage-output : Discover log files for compression]
2026-01-10 15:26:53.645561 | orchestrator | skipping: Conditional result was False
2026-01-10 15:26:53.658415 |
2026-01-10 15:26:53.658600 | LOOP [stage-output : Archive everything from logs]
2026-01-10 15:26:53.707307 |
2026-01-10 15:26:53.707514 | PLAY [Post cleanup play]
2026-01-10 15:26:53.717904 |
2026-01-10 15:26:53.718041 | TASK [Set cloud fact (Zuul deployment)]
2026-01-10 15:26:53.789347 | orchestrator | ok
2026-01-10 15:26:53.800048 |
2026-01-10 15:26:53.800266 | TASK [Set cloud fact (local deployment)]
2026-01-10 15:26:53.826722 | orchestrator | skipping: Conditional result was False
2026-01-10 15:26:53.842685 |
2026-01-10 15:26:53.842870 | TASK [Clean the cloud environment]
2026-01-10 15:26:54.453945 | orchestrator | 2026-01-10 15:26:54 - clean up servers
2026-01-10 15:26:55.294546 | orchestrator | 2026-01-10 15:26:55 - testbed-manager
2026-01-10 15:26:55.391775 | orchestrator | 2026-01-10 15:26:55 - testbed-node-5
2026-01-10 15:26:55.481970 | orchestrator | 2026-01-10 15:26:55 - testbed-node-4
2026-01-10 15:26:55.569436 | orchestrator | 2026-01-10 15:26:55 - testbed-node-1
2026-01-10 15:26:55.670120 | orchestrator | 2026-01-10 15:26:55 - testbed-node-0
2026-01-10 15:26:55.764127 | orchestrator | 2026-01-10 15:26:55 - testbed-node-2
2026-01-10 15:26:55.865115 | orchestrator | 2026-01-10 15:26:55 - testbed-node-3
2026-01-10 15:26:55.964060 | orchestrator | 2026-01-10 15:26:55 - clean up keypairs
2026-01-10 15:26:55.985339 | orchestrator | 2026-01-10 15:26:55 - testbed
2026-01-10 15:26:56.014881 | orchestrator | 2026-01-10 15:26:56 - wait for servers to be gone
2026-01-10 15:27:04.789729 | orchestrator | 2026-01-10 15:27:04 - clean up ports
2026-01-10 15:27:04.967670 | orchestrator | 2026-01-10 15:27:04 - 0d29d766-e7a1-4fb6-89d1-1dfc193ea721
2026-01-10 15:27:05.219177 | orchestrator | 2026-01-10 15:27:05 - 15bccc55-3b79-4612-a026-a719205ac2ed
2026-01-10 15:27:05.495186 | orchestrator | 2026-01-10 15:27:05 - 22b798ee-68e4-4f5a-916d-a968b9464d61
2026-01-10 15:27:05.747910 | orchestrator | 2026-01-10 15:27:05 - 2eb836ac-e5d7-40da-b842-32062da087ab
2026-01-10 15:27:05.942801 | orchestrator | 2026-01-10 15:27:05 - 3e0c9515-5380-42c7-a24d-f73e336916a4
2026-01-10 15:27:06.350266 | orchestrator | 2026-01-10 15:27:06 - 41044753-bfca-4855-b291-797854a64a1a
2026-01-10 15:27:06.576403 | orchestrator | 2026-01-10 15:27:06 - 6f5d7597-9ce8-401f-aeac-791a57f66809
2026-01-10 15:27:06.774375 | orchestrator | 2026-01-10 15:27:06 - clean up volumes
2026-01-10 15:27:06.889173 | orchestrator | 2026-01-10 15:27:06 - testbed-volume-0-node-base
2026-01-10 15:27:06.926736 | orchestrator | 2026-01-10 15:27:06 - testbed-volume-2-node-base
2026-01-10 15:27:06.966084 | orchestrator | 2026-01-10 15:27:06 - testbed-volume-5-node-base
2026-01-10 15:27:07.004799 | orchestrator | 2026-01-10 15:27:07 - testbed-volume-1-node-base
2026-01-10 15:27:07.048638 | orchestrator | 2026-01-10 15:27:07 - testbed-volume-4-node-base
2026-01-10 15:27:07.089358 | orchestrator | 2026-01-10 15:27:07 - testbed-volume-3-node-base
2026-01-10 15:27:07.133054 | orchestrator | 2026-01-10 15:27:07 - testbed-volume-manager-base
2026-01-10 15:27:07.178873 | orchestrator | 2026-01-10 15:27:07 - testbed-volume-7-node-4
2026-01-10 15:27:07.224489 | orchestrator | 2026-01-10 15:27:07 - testbed-volume-6-node-3
2026-01-10 15:27:07.267635 | orchestrator | 2026-01-10 15:27:07 - testbed-volume-4-node-4
2026-01-10 15:27:07.315984 | orchestrator | 2026-01-10 15:27:07 - testbed-volume-2-node-5
2026-01-10 15:27:07.361232 | orchestrator | 2026-01-10 15:27:07 - testbed-volume-1-node-4
2026-01-10 15:27:07.405419 | orchestrator | 2026-01-10 15:27:07 - testbed-volume-3-node-3
2026-01-10 15:27:07.446793 | orchestrator | 2026-01-10 15:27:07 - testbed-volume-5-node-5
2026-01-10 15:27:07.489143 | orchestrator | 2026-01-10 15:27:07 - testbed-volume-8-node-5
2026-01-10 15:27:07.541258 | orchestrator | 2026-01-10 15:27:07 - testbed-volume-0-node-3
2026-01-10 15:27:07.584394 | orchestrator | 2026-01-10 15:27:07 - disconnect routers
2026-01-10 15:27:07.703750 | orchestrator | 2026-01-10 15:27:07 - testbed
2026-01-10 15:27:09.047017 | orchestrator | 2026-01-10 15:27:09 - clean up subnets
2026-01-10 15:27:09.087442 | orchestrator | 2026-01-10 15:27:09 - subnet-testbed-management
2026-01-10 15:27:09.229813 | orchestrator | 2026-01-10 15:27:09 - clean up networks
2026-01-10 15:27:09.369864 | orchestrator | 2026-01-10 15:27:09 - net-testbed-management
2026-01-10 15:27:09.672851 | orchestrator | 2026-01-10 15:27:09 - clean up security groups
2026-01-10 15:27:09.711107 | orchestrator | 2026-01-10 15:27:09 - testbed-management
2026-01-10 15:27:09.818589 | orchestrator | 2026-01-10 15:27:09 - testbed-node
2026-01-10 15:27:09.919880 | orchestrator | 2026-01-10 15:27:09 - clean up floating ips
2026-01-10 15:27:09.956008 | orchestrator | 2026-01-10 15:27:09 - 81.163.193.86
2026-01-10 15:27:10.294990 | orchestrator | 2026-01-10 15:27:10 - clean up routers
2026-01-10 15:27:10.366091 | orchestrator | 2026-01-10 15:27:10 - testbed
2026-01-10 15:27:11.901264 | orchestrator | ok: Runtime: 0:00:17.458908
2026-01-10 15:27:11.905741 |
2026-01-10 15:27:11.905913 | PLAY RECAP
2026-01-10 15:27:11.906038 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-01-10 15:27:11.906103 |
2026-01-10 15:27:12.059767 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-10 15:27:12.062343 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-10 15:27:12.851832 |
2026-01-10 15:27:12.852027 | PLAY [Cleanup play]
2026-01-10 15:27:12.869262 |
2026-01-10 15:27:12.869427 | TASK [Set cloud fact (Zuul deployment)]
2026-01-10 15:27:12.932568 | orchestrator | ok
2026-01-10 15:27:12.941498 |
2026-01-10 15:27:12.941696 | TASK [Set cloud fact (local deployment)]
2026-01-10 15:27:12.977050 | orchestrator | skipping: Conditional result was False
2026-01-10 15:27:12.998311 |
2026-01-10 15:27:12.998493 | TASK [Clean the cloud environment]
2026-01-10 15:27:14.223571 | orchestrator | 2026-01-10 15:27:14 - clean up servers
2026-01-10 15:27:14.703131 | orchestrator | 2026-01-10 15:27:14 - clean up keypairs
2026-01-10 15:27:14.723573 | orchestrator | 2026-01-10 15:27:14 - wait for servers to be gone
2026-01-10 15:27:14.769657 | orchestrator | 2026-01-10 15:27:14 - clean up ports
2026-01-10 15:27:14.869993 | orchestrator | 2026-01-10 15:27:14 - clean up volumes
2026-01-10 15:27:14.946587 | orchestrator | 2026-01-10 15:27:14 - disconnect routers
2026-01-10 15:27:14.972757 | orchestrator | 2026-01-10 15:27:14 - clean up subnets
2026-01-10 15:27:14.996454 | orchestrator | 2026-01-10 15:27:14 - clean up networks
2026-01-10 15:27:15.156652 | orchestrator | 2026-01-10 15:27:15 - clean up security groups
2026-01-10 15:27:15.194371 | orchestrator | 2026-01-10 15:27:15 - clean up floating ips
2026-01-10 15:27:15.219277 | orchestrator | 2026-01-10 15:27:15 - clean up routers
2026-01-10 15:27:15.538318 | orchestrator | ok: Runtime: 0:00:01.453280
2026-01-10 15:27:15.542268 |
2026-01-10 15:27:15.542436 | PLAY RECAP
2026-01-10 15:27:15.542564 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-10 15:27:15.542630 |
2026-01-10 15:27:15.682526 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-10 15:27:15.685041 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-10 15:27:16.493228 |
2026-01-10 15:27:16.493405 | PLAY [Base post-fetch]
2026-01-10 15:27:16.508910 |
2026-01-10 15:27:16.509074 | TASK [fetch-output : Set log path for multiple nodes]
2026-01-10 15:27:16.564909 | orchestrator | skipping: Conditional result was False
2026-01-10 15:27:16.573921 |
2026-01-10 15:27:16.574127 | TASK [fetch-output : Set log path for single node]
2026-01-10 15:27:16.633155 | orchestrator | ok
2026-01-10 15:27:16.644851 |
2026-01-10 15:27:16.645001 | LOOP [fetch-output : Ensure local output dirs]
2026-01-10 15:27:17.180145 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/aed8ea0702db46caaf17932aabaccc56/work/logs"
2026-01-10 15:27:17.486931 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/aed8ea0702db46caaf17932aabaccc56/work/artifacts"
2026-01-10 15:27:17.804114 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/aed8ea0702db46caaf17932aabaccc56/work/docs"
2026-01-10 15:27:17.828073 |
2026-01-10 15:27:17.828293 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-01-10 15:27:18.849605 | orchestrator | changed: .d..t...... ./
2026-01-10 15:27:18.850009 | orchestrator | changed: All items complete
2026-01-10 15:27:18.850072 |
2026-01-10 15:27:19.610357 | orchestrator | changed: .d..t...... ./
2026-01-10 15:27:20.380168 | orchestrator | changed: .d..t...... ./
2026-01-10 15:27:20.413766 |
2026-01-10 15:27:20.413968 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-01-10 15:27:20.969299 | orchestrator -> localhost | ok: Item: artifacts Runtime: 0:00:00.010920
2026-01-10 15:27:21.261599 | orchestrator -> localhost | ok: Item: docs Runtime: 0:00:00.006348
2026-01-10 15:27:21.284694 |
2026-01-10 15:27:21.284870 | PLAY RECAP
2026-01-10 15:27:21.284982 | orchestrator | ok: 4 changed: 3 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-10 15:27:21.285037 |
2026-01-10 15:27:21.433789 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-10 15:27:21.435204 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-10 15:27:22.202979 |
2026-01-10 15:27:22.203973 | PLAY [Base post]
2026-01-10 15:27:22.219695 |
2026-01-10 15:27:22.219853 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-01-10 15:27:23.225300 | orchestrator | changed
2026-01-10 15:27:23.236091 |
2026-01-10 15:27:23.236312 | PLAY RECAP
2026-01-10 15:27:23.236402 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-01-10 15:27:23.236481 |
2026-01-10 15:27:23.376292 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-10 15:27:23.377775 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-01-10 15:27:24.224961 |
2026-01-10 15:27:24.225162 | PLAY [Base post-logs]
2026-01-10 15:27:24.236729 |
2026-01-10 15:27:24.236888 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-01-10 15:27:24.764820 | localhost | changed
2026-01-10 15:27:24.783417 |
2026-01-10 15:27:24.783642 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-01-10 15:27:24.822925 | localhost | ok
2026-01-10 15:27:24.828379 |
2026-01-10 15:27:24.828585 | TASK [Set zuul-log-path fact]
2026-01-10 15:27:24.847472 | localhost | ok
2026-01-10 15:27:24.862368 |
2026-01-10 15:27:24.862540 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-10 15:27:24.891303 | localhost | ok
2026-01-10 15:27:24.896883 |
2026-01-10 15:27:24.897046 | TASK [upload-logs : Create log directories]
2026-01-10 15:27:25.405810 | localhost | changed
2026-01-10 15:27:25.413972 |
2026-01-10 15:27:25.414182 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-01-10 15:27:25.980364 | localhost -> localhost | ok: Runtime: 0:00:00.008567
2026-01-10 15:27:25.990817 |
2026-01-10 15:27:25.991052 | TASK [upload-logs : Upload logs to log server]
2026-01-10 15:27:26.630665 | localhost | Output suppressed because no_log was given
2026-01-10 15:27:26.633939 |
2026-01-10 15:27:26.634113 | LOOP [upload-logs : Compress console log and json output]
2026-01-10 15:27:26.693227 | localhost | skipping: Conditional result was False
2026-01-10 15:27:26.700019 | localhost | skipping: Conditional result was False
2026-01-10 15:27:26.707515 |
2026-01-10 15:27:26.707836 | LOOP [upload-logs : Upload compressed console log and json output]
2026-01-10 15:27:26.759429 | localhost | skipping: Conditional result was False
2026-01-10 15:27:26.759729 |
2026-01-10 15:27:26.768525 | localhost | skipping: Conditional result was False
2026-01-10 15:27:26.781285 |
2026-01-10 15:27:26.781522 | LOOP [upload-logs : Upload console log and json output]